Jan 30 00:12:17 crc systemd[1]: Starting Kubernetes Kubelet... Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 00:12:18 crc kubenswrapper[5110]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.518580 5110 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529476 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529503 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529509 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529514 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529519 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529525 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529529 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529534 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529539 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529545 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529551 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529556 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529561 5110 feature_gate.go:328] unrecognized feature gate: 
DNSNameResolver Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529567 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529572 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529577 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529583 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529589 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529597 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529605 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529612 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529618 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529624 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529629 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529670 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529678 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529683 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529688 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529693 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529699 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529703 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529709 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529713 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529718 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529723 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529728 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529732 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529745 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529749 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529754 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529758 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529763 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529767 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529775 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529779 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529786 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529791 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529795 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529799 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529804 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529809 5110 feature_gate.go:328] unrecognized feature gate: 
SigstoreImageVerificationPKI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529814 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529818 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529822 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529827 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529831 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529835 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529840 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529844 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529848 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529852 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529857 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529861 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529866 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529870 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529875 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529880 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529885 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529889 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529896 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529900 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529904 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529909 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529913 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529918 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529924 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 
30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529929 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529963 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529970 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529974 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529978 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529983 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529987 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529992 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.529996 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530000 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530555 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530565 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530571 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530576 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530580 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530585 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530589 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530594 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530598 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530603 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530607 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530612 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530616 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530620 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530625 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530629 
5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530634 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530639 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530643 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530648 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530652 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530660 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530664 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530668 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530673 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530678 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530682 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530686 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530691 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530695 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530700 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530704 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530708 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530713 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530718 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530723 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530727 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530731 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530736 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530740 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530745 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:12:18 
crc kubenswrapper[5110]: W0130 00:12:18.530749 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530754 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530759 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530763 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530768 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530772 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530777 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530781 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530786 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530790 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530794 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530798 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530805 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530809 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530814 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530818 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530823 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530828 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530832 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530837 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530841 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530846 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530850 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530855 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530859 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530864 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints 
Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530869 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530873 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530877 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530882 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530886 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530892 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530896 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530901 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530905 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530910 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530916 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530923 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530929 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530934 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530939 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530944 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530949 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530953 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.530960 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532764 5110 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532781 5110 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532790 5110 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532797 5110 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532807 5110 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532813 5110 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532828 5110 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532835 5110 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532840 5110 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532845 5110 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532851 5110 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532856 5110 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532863 5110 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532868 5110 flags.go:64] FLAG: --cgroup-root="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532873 5110 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532878 5110 flags.go:64] FLAG: --client-ca-file="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532883 5110 flags.go:64] FLAG: --cloud-config="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532889 5110 flags.go:64] FLAG: --cloud-provider="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532894 5110 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532901 5110 flags.go:64] FLAG: --cluster-domain="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532906 5110 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 
00:12:18.532912 5110 flags.go:64] FLAG: --config-dir="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532917 5110 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532923 5110 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532929 5110 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532934 5110 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532939 5110 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532945 5110 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532950 5110 flags.go:64] FLAG: --contention-profiling="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532955 5110 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532959 5110 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532966 5110 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532971 5110 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532978 5110 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532983 5110 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532988 5110 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.532993 5110 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533001 5110 flags.go:64] FLAG: --enable-server="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533006 5110 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533012 5110 flags.go:64] FLAG: --event-burst="100" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533017 5110 flags.go:64] FLAG: --event-qps="50" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533022 5110 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533027 5110 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533032 5110 flags.go:64] FLAG: --eviction-hard="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533048 5110 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533052 5110 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533058 5110 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533063 5110 flags.go:64] FLAG: --eviction-soft="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533068 5110 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533073 5110 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533078 5110 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533083 5110 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533087 5110 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533092 5110 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533097 5110 flags.go:64] FLAG: --feature-gates="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533104 5110 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533108 5110 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533113 5110 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533119 5110 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533123 5110 flags.go:64] FLAG: --healthz-port="10248" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533128 5110 flags.go:64] FLAG: --help="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533133 5110 flags.go:64] FLAG: --hostname-override="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533138 5110 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533143 5110 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533148 5110 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533153 5110 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533158 5110 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533163 5110 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533167 5110 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533172 5110 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533180 5110 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533185 5110 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533190 5110 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533195 5110 flags.go:64] FLAG: --kube-reserved="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533200 5110 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533205 5110 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533211 5110 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533216 5110 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533221 5110 flags.go:64] FLAG: --lock-file="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533226 5110 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533231 5110 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 
00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533236 5110 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533243 5110 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533248 5110 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533253 5110 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533258 5110 flags.go:64] FLAG: --logging-format="text" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533263 5110 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533268 5110 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533273 5110 flags.go:64] FLAG: --manifest-url="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533278 5110 flags.go:64] FLAG: --manifest-url-header="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533285 5110 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533290 5110 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533297 5110 flags.go:64] FLAG: --max-pods="110" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533302 5110 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533307 5110 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533313 5110 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533318 5110 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533323 5110 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533348 5110 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533356 5110 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533372 5110 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533377 5110 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533382 5110 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533388 5110 flags.go:64] FLAG: --pod-cidr="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533393 5110 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533401 5110 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533406 5110 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533411 5110 flags.go:64] FLAG: --pods-per-core="0" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533416 5110 flags.go:64] FLAG: --port="10250" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533421 5110 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533427 5110 flags.go:64] FLAG: --provider-id="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533431 5110 flags.go:64] FLAG: --qos-reserved="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533436 5110 flags.go:64] FLAG: --read-only-port="10255" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533441 5110 flags.go:64] FLAG: --register-node="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533446 5110 flags.go:64] FLAG: --register-schedulable="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533452 5110 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533460 5110 flags.go:64] FLAG: --registry-burst="10" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533465 5110 flags.go:64] FLAG: --registry-qps="5" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533470 5110 flags.go:64] FLAG: --reserved-cpus="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533474 5110 flags.go:64] FLAG: --reserved-memory="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533481 5110 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533486 5110 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533491 5110 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533496 5110 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533501 5110 flags.go:64] FLAG: --runonce="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533506 5110 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533511 5110 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533517 5110 flags.go:64] FLAG: --seccomp-default="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533522 5110 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533527 5110 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533532 5110 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533537 5110 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533543 5110 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533547 5110 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533552 5110 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533557 5110 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533563 5110 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533568 5110 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533573 5110 flags.go:64] FLAG: --system-cgroups="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533578 5110 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533590 5110 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533595 5110 flags.go:64] FLAG: --tls-cert-file="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533600 5110 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533607 5110 flags.go:64] FLAG: --tls-min-version="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533613 5110 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533617 5110 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533622 5110 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533627 5110 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533632 5110 flags.go:64] FLAG: --v="2" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533665 5110 flags.go:64] FLAG: --version="false" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533671 5110 flags.go:64] FLAG: --vmodule="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533678 5110 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.533683 5110 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533797 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533803 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533808 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533814 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533820 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533825 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533943 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533948 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533952 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533957 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533962 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533967 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533971 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533975 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533980 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533984 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533989 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533993 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.533998 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534004 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534009 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534013 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534018 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534022 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534027 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534031 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534036 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534040 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534044 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534049 5110 feature_gate.go:328] unrecognized feature 
gate: MetricsCollectionProfiles Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534053 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534058 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534062 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534066 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534071 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534075 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534079 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534083 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534090 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534094 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534099 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534104 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534108 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534112 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534117 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534121 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534126 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534131 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534135 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534140 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534145 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534152 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534158 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534164 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534168 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534173 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534178 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534183 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534188 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534193 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534197 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534202 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534207 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534211 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534216 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534220 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534226 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534231 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534237 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534242 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534251 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534256 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534262 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534267 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534273 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534278 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534282 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534287 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534291 5110 
feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534296 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534301 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534305 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534312 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534320 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534325 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.534352 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.535221 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.550884 5110 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.550926 5110 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551034 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551047 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551056 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551064 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551129 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551139 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551147 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551154 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.551164 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
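[annotation] Each feature-gate enumeration pass in this boot repeats the same gate names within a few milliseconds, so the warning volume adds nothing after the first pass. A throwaway sketch, assuming a journal dump on stdin (my own tooling, not part of kubelet or journalctl), that tallies how often each gate is warned about:

    // Hypothetical helper: count `unrecognized feature gate: X` warnings
    // in a journal dump read from stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "sort"
        "strings"
    )

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        // Journal lines in dumps like this can be very long; raise the
        // scanner's default 64 KiB token limit.
        sc.Buffer(make([]byte, 64*1024), 1024*1024)
        const marker = "unrecognized feature gate: "
        for sc.Scan() {
            if i := strings.Index(sc.Text(), marker); i >= 0 {
                if fields := strings.Fields(sc.Text()[i+len(marker):]); len(fields) > 0 {
                    counts[fields[0]]++ // gate name is the next token
                }
            }
        }
        names := make([]string, 0, len(counts))
        for g := range counts {
            names = append(names, g)
        }
        sort.Strings(names)
        for _, g := range names {
            fmt.Printf("%4d %s\n", counts[g], g)
        }
    }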
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.553977 5110 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.559811 5110 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.564233 5110 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.564417 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.566011 5110 server.go:1019] "Starting client certificate rotation"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.566180 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.566239 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.610850 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:12:18 crc
kubenswrapper[5110]: E0130 00:12:18.615286 5110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.616068 5110 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.641687 5110 log.go:25] "Validated CRI v1 runtime API" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.698936 5110 log.go:25] "Validated CRI v1 image API" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.701071 5110 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.708197 5110 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-30-00-05-06-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.708234 5110 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.724912 5110 manager.go:217] Machine: {Timestamp:2026-01-30 00:12:18.722075422 +0000 UTC m=+0.680311561 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4 BootID:6f599707-daad-4f90-b3eb-35dae3554a65 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:95:22:9d Speed:0 Mtu:1500} 
{Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:95:22:9d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a2:d6:99 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2c:49:17 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:73:ca:21 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:13:3c:78 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:51:ef:e3:d8:66 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:b2:57:ff:2b:53 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] 
Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.725217 5110 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.725471 5110 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.729012 5110 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.729068 5110 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.729285 5110 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.729322 5110 container_manager_linux.go:306] "Creating device plugin manager" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.729436 5110 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.730231 5110 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.731127 5110 state_mem.go:36] "Initialized new in-memory state store" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.731305 5110 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.734041 5110 kubelet.go:491] "Attempting to sync node with API server" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.734071 5110 
kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.734088 5110 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.734103 5110 kubelet.go:397] "Adding apiserver pod source" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.734124 5110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.737106 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.737129 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.738950 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.739007 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.740002 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.740026 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.745095 5110 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.746545 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.747708 5110 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.749569 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.749785 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.749912 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750015 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750128 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750242 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750404 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 
00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750542 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750675 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750792 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.750899 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.751490 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.754831 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.754888 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.756842 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.779381 5110 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.779466 5110 server.go:1295] "Started kubelet"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.779740 5110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.779885 5110 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.779864 5110 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.780575 5110 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 00:12:18 crc systemd[1]: Started Kubernetes Kubelet.
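[annotation] The bootstrap.go:266 error earlier in this boot ("part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired") is recoverable because client rotation is on: the kubelet falls back to the bootstrap credentials and re-requests a CSR once api-int.crc.testing:6443 starts answering (the "connection refused" errors here apparently just mean the API server is not up yet this early in boot). To inspect a rotated cert such as /var/lib/kubelet/pki/kubelet-client-current.pem by hand, a minimal standard-library Go sketch (the file-path argument and all names are illustrative, not kubelet code):

    // Hypothetical helper: print the NotAfter of the first CERTIFICATE
    // block in a PEM file and whether it is already expired.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: certexpiry <pem-file>")
            os.Exit(2)
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            fmt.Printf("subject=%s notAfter=%s expired=%v\n",
                cert.Subject, cert.NotAfter.Format(time.RFC3339),
                time.Now().After(cert.NotAfter))
            return
        }
        fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
        os.Exit(1)
    }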
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.782030 5110 server.go:317] "Adding debug handlers to kubelet server"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.783184 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.784665 5110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.785404 5110 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.785429 5110 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.785602 5110 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.785639 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.785686 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.785711 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms"
Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.785682 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59d2669c3702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,LastTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.787594 5110 factory.go:55] Registering systemd factory
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.787682 5110 factory.go:223] Registration of the systemd container factory successfully
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.788015 5110 factory.go:153] Registering CRI-O factory
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.788044 5110 factory.go:223] Registration of the crio container factory successfully
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.788119 5110 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.788156 5110 factory.go:103] Registering Raw factory
Jan 30 00:12:18 crc
kubenswrapper[5110]: I0130 00:12:18.788172 5110 manager.go:1196] Started watching for new ooms in manager Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.789200 5110 manager.go:319] Starting recovery of all containers Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.821093 5110 manager.go:324] Recovery completed Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.825873 5110 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/ocp-clusterid.service": readdirent /sys/fs/cgroup/system.slice/ocp-clusterid.service: no such file or directory Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.826422 5110 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": readdirent /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Jan 30 00:12:18 crc kubenswrapper[5110]: W0130 00:12:18.837776 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-wait-apiservices-available.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-wait-apiservices-available.service: no such file or directory Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.842462 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.843863 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.843920 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.843930 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.845069 5110 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.846137 5110 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.846166 5110 state_mem.go:36] "Initialized new in-memory state store" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.861115 5110 policy_none.go:49] "None policy: Start" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.861146 5110 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.861159 5110 state_mem.go:35] "Initializing new in-memory state store" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863885 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863933 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863947 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 30 00:12:18 crc 
kubenswrapper[5110]: I0130 00:12:18.863960 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863973 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863985 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.863997 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864009 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864024 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864035 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864046 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864058 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864068 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864078 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864093 5110 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864104 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864114 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864125 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864135 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864145 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864156 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864168 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864180 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864192 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864203 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864215 5110 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864226 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864238 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864253 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864265 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864277 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864289 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864325 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864356 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864367 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864380 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864391 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864403 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.864414 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866509 5110 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866545 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866560 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866575 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866590 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866607 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866624 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866639 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866650 5110 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866661 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866675 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866691 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866704 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866715 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866726 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866738 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866751 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866765 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866786 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866799 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866810 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866821 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866851 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866868 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866884 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866901 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866913 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866928 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866943 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866975 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.866988 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" 
volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867007 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867020 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867034 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867051 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867063 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867095 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867107 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867117 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867129 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867141 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867152 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867162 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867174 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867185 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867197 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867210 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867221 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867232 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867243 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867259 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867270 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867281 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867293 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867305 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867323 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867357 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867370 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867382 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867394 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867405 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867416 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867427 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867437 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867448 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867458 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867470 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867480 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867491 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867503 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867514 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867526 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867537 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867548 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867574 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867585 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867597 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867607 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867618 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867628 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867638 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867651 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867663 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867674 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867686 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867697 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867708 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867721 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867731 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867742 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867751 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867762 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867773 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867784 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867794 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867804 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867814 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867825 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867835 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867847 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867858 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867868 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867879 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867890 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867901 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867911 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867921 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867933 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867944 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867955 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867965 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.867995 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868006 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868017 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868029 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868040 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868053 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868064 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868075 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868089 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868100 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868111 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868122 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868133 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868144 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868156 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868169 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868179 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868190 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868200 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868212 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868222 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868234 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868244 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868255 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868266 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868277 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868289 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868275 5110 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868300 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868386 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868399 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868411 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868422 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868434 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868445 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868457 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868468 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868480 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868493 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868503 5110 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868513 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868525 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868535 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868546 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868559 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868569 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868579 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868590 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868600 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868613 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868624 5110 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868635 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868646 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868656 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868666 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868678 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868688 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868699 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868711 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868721 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868733 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868744 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868755 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868766 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868776 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868787 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868798 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868808 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868819 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868830 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868841 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868853 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868866 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868883 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868895 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868907 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868944 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868954 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868965 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868978 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868988 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.868998 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869012 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869023 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869033 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869044 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869054 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869064 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869074 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869085 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869096 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869106 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869117 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869128 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869139 5110 reconstruct.go:97] "Volume reconstruction finished" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.869148 5110 reconciler.go:26] "Reconciler: start to 
sync state" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.870891 5110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.870954 5110 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.870994 5110 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.871008 5110 kubelet.go:2451] "Starting kubelet main sync loop" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.871161 5110 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.873683 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.886378 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.919638 5110 manager.go:341] "Starting Device Plugin manager" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.920215 5110 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.920246 5110 server.go:85] "Starting device plugin registration server" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.920880 5110 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.920909 5110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.921453 5110 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.921616 5110 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.921629 5110 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.927869 5110 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.927922 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.971760 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.971973 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.973347 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.973380 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.973391 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.974232 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.975100 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.975131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.975146 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.975484 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.975531 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.977530 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.977559 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.977571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978245 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978347 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978380 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978957 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978980 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.978992 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979024 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979074 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979089 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979701 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979744 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.979766 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980410 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980418 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980537 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980554 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.980562 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981023 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981179 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981215 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981540 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981603 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981706 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981722 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.981731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.982421 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.982524 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.982966 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.983069 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:18 crc kubenswrapper[5110]: I0130 00:12:18.983153 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:18 crc kubenswrapper[5110]: E0130 00:12:18.986496 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.007494 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.021730 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.022632 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.022683 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.022703 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.022740 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.023383 5110 
kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.030907 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.038186 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.061634 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.068450 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.071820 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.071996 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072148 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072284 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072456 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072969 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.073288 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" 
(UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.073448 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072917 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.072869 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.073766 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.073903 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074106 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074164 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074213 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074256 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074353 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074400 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074431 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074471 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074491 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074511 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074535 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074564 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074583 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.074604 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.075030 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.075672 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.075727 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.075791 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176225 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176303 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176449 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176509 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176446 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176655 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176708 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176741 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176748 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176776 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176806 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176807 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176855 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176881 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc 
kubenswrapper[5110]: I0130 00:12:19.176900 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176916 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176954 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.176994 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177004 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177034 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177066 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177102 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177121 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177134 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177188 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177209 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177225 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177260 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177307 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177384 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177434 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.177485 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.224202 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.225602 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.225662 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.225682 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.225718 5110 kubelet_node_status.go:78] "Attempting to register 
node" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.226507 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.309269 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.332053 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.338850 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: W0130 00:12:19.354123 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-4e4be17e33b6ac0b8c8b151be34176324e687fce192c29ae0f34292ea652618a WatchSource:0}: Error finding container 4e4be17e33b6ac0b8c8b151be34176324e687fce192c29ae0f34292ea652618a: Status 404 returned error can't find the container with id 4e4be17e33b6ac0b8c8b151be34176324e687fce192c29ae0f34292ea652618a Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.361233 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.362465 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.369956 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:19 crc kubenswrapper[5110]: W0130 00:12:19.385937 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-f93962c4914d87214a9e64367a4a4e9cd9daa6ff319ad8152d26e610557672af WatchSource:0}: Error finding container f93962c4914d87214a9e64367a4a4e9cd9daa6ff319ad8152d26e610557672af: Status 404 returned error can't find the container with id f93962c4914d87214a9e64367a4a4e9cd9daa6ff319ad8152d26e610557672af Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.387291 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Jan 30 00:12:19 crc kubenswrapper[5110]: W0130 00:12:19.388242 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-77afd7c203be3e13b23a2b28e05f5f42f3686b76ea4316662475604e115479f4 WatchSource:0}: Error finding container 77afd7c203be3e13b23a2b28e05f5f42f3686b76ea4316662475604e115479f4: Status 404 returned error can't find the container with id 77afd7c203be3e13b23a2b28e05f5f42f3686b76ea4316662475604e115479f4 Jan 30 00:12:19 crc kubenswrapper[5110]: W0130 00:12:19.397955 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-ef9865d412db7f98181c6385c6fa5139b7cce4c74e57e459d15f236143e83100 WatchSource:0}: Error finding container ef9865d412db7f98181c6385c6fa5139b7cce4c74e57e459d15f236143e83100: Status 404 returned error can't find the container with id ef9865d412db7f98181c6385c6fa5139b7cce4c74e57e459d15f236143e83100 Jan 30 00:12:19 crc kubenswrapper[5110]: W0130 00:12:19.400276 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-1d6ff46380dd51a42af5436fcafdba7e6578fc12842fe2fa8ed42d6f97a1200b WatchSource:0}: Error finding container 1d6ff46380dd51a42af5436fcafdba7e6578fc12842fe2fa8ed42d6f97a1200b: Status 404 returned error can't find the container with id 1d6ff46380dd51a42af5436fcafdba7e6578fc12842fe2fa8ed42d6f97a1200b Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.627232 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.629034 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.629092 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.629112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.629153 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.629716 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.758050 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.816614 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.821425 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:12:19 crc kubenswrapper[5110]: E0130 00:12:19.868476 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.876275 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"77afd7c203be3e13b23a2b28e05f5f42f3686b76ea4316662475604e115479f4"} Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.878507 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"f93962c4914d87214a9e64367a4a4e9cd9daa6ff319ad8152d26e610557672af"} Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.880252 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4e4be17e33b6ac0b8c8b151be34176324e687fce192c29ae0f34292ea652618a"} Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.881445 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1d6ff46380dd51a42af5436fcafdba7e6578fc12842fe2fa8ed42d6f97a1200b"} Jan 30 00:12:19 crc kubenswrapper[5110]: I0130 00:12:19.882810 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ef9865d412db7f98181c6385c6fa5139b7cce4c74e57e459d15f236143e83100"} Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.188707 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" 
interval="1.6s" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.197132 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.430263 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.431319 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.431424 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.431454 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.431498 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.432178 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.697881 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.700802 5110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.757896 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.888660 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.888753 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f"} Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.888925 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.889978 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.890047 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 
00:12:20.890077 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.890539 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.892220 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.892342 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f"} Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.892536 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893022 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893422 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893489 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893543 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893560 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893568 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.893603 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.893970 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.896612 5110 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.896720 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.896739 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98"} Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.897424 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.897496 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc 
kubenswrapper[5110]: I0130 00:12:20.897521 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.897800 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.900476 5110 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.900579 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85"} Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.900721 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.901918 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.901999 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.902027 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:20 crc kubenswrapper[5110]: E0130 00:12:20.902466 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.904044 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78"} Jan 30 00:12:20 crc kubenswrapper[5110]: I0130 00:12:20.904121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.758834 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Jan 30 00:12:21 crc kubenswrapper[5110]: E0130 00:12:21.789526 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.911606 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.911657 5110 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.911670 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.911814 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.913013 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.913051 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.913062 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:21 crc kubenswrapper[5110]: E0130 00:12:21.913288 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.915320 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.915396 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.915400 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.916453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.916508 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.916522 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:21 crc kubenswrapper[5110]: E0130 00:12:21.916800 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.918797 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.918838 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.918849 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.918861 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.920535 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687" exitCode=0 Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.920600 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.920736 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.921715 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.921747 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.921758 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:21 crc kubenswrapper[5110]: E0130 00:12:21.921938 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.923578 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad"} Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.923730 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.924375 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.924402 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:21 crc kubenswrapper[5110]: I0130 00:12:21.924413 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:21 crc kubenswrapper[5110]: E0130 00:12:21.924592 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.032759 5110 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.034867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.034921 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.034935 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.034964 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.035452 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.250207 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.440805 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.930504 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a644d096ce5253217f82c045d5d3652a6dc912b54f9e45b4f68fe5552ca12271"} Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.930711 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.931499 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.931549 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.931567 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.931890 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.933608 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1" exitCode=0 Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.933792 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.933832 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:22 crc 
kubenswrapper[5110]: I0130 00:12:22.933838 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934014 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1"} Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934093 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934418 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934587 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934638 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934799 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934839 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.934857 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.935099 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.935433 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.935684 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.935720 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.935737 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.936003 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.936087 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.936123 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:22 crc kubenswrapper[5110]: I0130 00:12:22.936140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:22 crc kubenswrapper[5110]: E0130 00:12:22.936574 5110 kubelet.go:3336] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.942047 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9"} Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.942107 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac"} Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.942124 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7"} Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.942282 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.942409 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.944012 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.944083 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:23 crc kubenswrapper[5110]: I0130 00:12:23.944105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:23 crc kubenswrapper[5110]: E0130 00:12:23.944880 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.874967 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.942662 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.953621 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251"} Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.953699 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a"} Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.953764 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.953833 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.953902 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.954859 5110 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.954952 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.954976 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.954976 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.955022 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:24 crc kubenswrapper[5110]: I0130 00:12:24.955040 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:24 crc kubenswrapper[5110]: E0130 00:12:24.955725 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:24 crc kubenswrapper[5110]: E0130 00:12:24.955858 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.236213 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.237698 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.237801 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.237825 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.237884 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.956867 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.958021 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.958112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:25 crc kubenswrapper[5110]: I0130 00:12:25.958134 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:25 crc kubenswrapper[5110]: E0130 00:12:25.959092 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:26 crc kubenswrapper[5110]: I0130 00:12:26.355098 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:26 crc kubenswrapper[5110]: I0130 00:12:26.355535 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:12:26 crc kubenswrapper[5110]: I0130 00:12:26.355599 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:26 
crc kubenswrapper[5110]: I0130 00:12:26.357104 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:26 crc kubenswrapper[5110]: I0130 00:12:26.357200 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:26 crc kubenswrapper[5110]: I0130 00:12:26.357221 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:26 crc kubenswrapper[5110]: E0130 00:12:26.357999 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.168812 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.169172 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.170626 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.170686 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.170743 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:27 crc kubenswrapper[5110]: E0130 00:12:27.171290 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.975486 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.975721 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.976701 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.976737 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:27 crc kubenswrapper[5110]: I0130 00:12:27.976748 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:27 crc kubenswrapper[5110]: E0130 00:12:27.977030 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.100266 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.100816 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.102426 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.102531 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.102562 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:28 crc kubenswrapper[5110]: E0130 00:12:28.103582 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.119006 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.142785 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.154749 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.326323 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 30 00:12:28 crc kubenswrapper[5110]: E0130 00:12:28.928248 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.972617 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.972934 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.974222 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.974328 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.974414 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.975532 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.975583 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:28 crc kubenswrapper[5110]: I0130 00:12:28.975595 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:28 crc kubenswrapper[5110]: E0130 00:12:28.975993 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:28 crc kubenswrapper[5110]: E0130 00:12:28.976035 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.296939 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.297301 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:29 crc 
kubenswrapper[5110]: I0130 00:12:29.298778 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.298854 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.298876 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:29 crc kubenswrapper[5110]: E0130 00:12:29.299612 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.975805 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.977262 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.977377 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:29 crc kubenswrapper[5110]: I0130 00:12:29.977400 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:29 crc kubenswrapper[5110]: E0130 00:12:29.978081 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:30 crc kubenswrapper[5110]: I0130 00:12:30.976501 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:12:30 crc kubenswrapper[5110]: I0130 00:12:30.976637 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.650910 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.651227 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.652756 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.653279 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.653463 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:31 crc kubenswrapper[5110]: E0130 00:12:31.654309 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 
00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.659852 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.983233 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.984378 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.984463 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:31 crc kubenswrapper[5110]: I0130 00:12:31.984484 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:31 crc kubenswrapper[5110]: E0130 00:12:31.985055 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:32 crc kubenswrapper[5110]: I0130 00:12:32.467212 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5110]: I0130 00:12:32.467301 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5110]: I0130 00:12:32.670575 5110 trace.go:236] Trace[1244527213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:12:22.668) (total time: 10001ms): Jan 30 00:12:32 crc kubenswrapper[5110]: Trace[1244527213]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:32.670) Jan 30 00:12:32 crc kubenswrapper[5110]: Trace[1244527213]: [10.001897827s] [10.001897827s] END Jan 30 00:12:32 crc kubenswrapper[5110]: E0130 00:12:32.670656 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:12:32 crc kubenswrapper[5110]: I0130 00:12:32.759897 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 00:12:32 crc kubenswrapper[5110]: I0130 00:12:32.903436 5110 trace.go:236] Trace[111457979]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:12:22.901) (total time: 10001ms): Jan 30 00:12:32 crc kubenswrapper[5110]: Trace[111457979]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:32.903) Jan 30 
00:12:32 crc kubenswrapper[5110]: Trace[111457979]: [10.00158455s] [10.00158455s] END Jan 30 00:12:32 crc kubenswrapper[5110]: E0130 00:12:32.903495 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:12:33 crc kubenswrapper[5110]: E0130 00:12:33.072968 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188f59d2669c3702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,LastTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:33 crc kubenswrapper[5110]: I0130 00:12:33.769501 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:12:33 crc kubenswrapper[5110]: I0130 00:12:33.769599 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 00:12:33 crc kubenswrapper[5110]: I0130 00:12:33.798554 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:12:33 crc kubenswrapper[5110]: I0130 00:12:33.798683 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 00:12:34 crc kubenswrapper[5110]: E0130 00:12:34.990178 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.364467 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.364822 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:36 crc 
kubenswrapper[5110]: I0130 00:12:36.366269 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.366401 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.366428 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:36 crc kubenswrapper[5110]: E0130 00:12:36.367204 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.372717 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:36 crc kubenswrapper[5110]: I0130 00:12:36.999000 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:37 crc kubenswrapper[5110]: I0130 00:12:37.000490 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:37 crc kubenswrapper[5110]: I0130 00:12:37.000564 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:37 crc kubenswrapper[5110]: I0130 00:12:37.000585 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:37 crc kubenswrapper[5110]: E0130 00:12:37.001883 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:37 crc kubenswrapper[5110]: E0130 00:12:37.101865 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:12:37 crc kubenswrapper[5110]: E0130 00:12:37.690761 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.147869 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.148475 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.149744 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.149817 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.149848 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:38 crc kubenswrapper[5110]: E0130 00:12:38.150664 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.171968 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.777853 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.777892 5110 trace.go:236] Trace[1581588792]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:12:28.426) (total time: 10351ms): Jan 30 00:12:38 crc kubenswrapper[5110]: Trace[1581588792]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 10351ms (00:12:38.777) Jan 30 00:12:38 crc kubenswrapper[5110]: Trace[1581588792]: [10.351665139s] [10.351665139s] END Jan 30 00:12:38 crc kubenswrapper[5110]: E0130 00:12:38.777950 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.777892 5110 trace.go:236] Trace[1124034514]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:12:27.528) (total time: 11249ms): Jan 30 00:12:38 crc kubenswrapper[5110]: Trace[1124034514]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 11249ms (00:12:38.777) Jan 30 00:12:38 crc kubenswrapper[5110]: Trace[1124034514]: [11.249459597s] [11.249459597s] END Jan 30 00:12:38 crc kubenswrapper[5110]: E0130 00:12:38.778011 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:12:38 crc kubenswrapper[5110]: E0130 00:12:38.780076 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.800130 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.837980 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49628->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.838122 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49628->192.168.126.11:17697: read: connection reset by peer" Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.838595 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 00:12:38 crc kubenswrapper[5110]: I0130 00:12:38.838658 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 00:12:38 crc kubenswrapper[5110]: E0130 00:12:38.928769 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.006320 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.008109 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a644d096ce5253217f82c045d5d3652a6dc912b54f9e45b4f68fe5552ca12271" exitCode=255 Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.008488 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.008484 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"a644d096ce5253217f82c045d5d3652a6dc912b54f9e45b4f68fe5552ca12271"} Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.008784 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.009384 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.009466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.009500 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.009895 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.010027 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.010069 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:39 crc kubenswrapper[5110]: E0130 00:12:39.010598 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:39 crc 
kubenswrapper[5110]: E0130 00:12:39.011048 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.011556 5110 scope.go:117] "RemoveContainer" containerID="a644d096ce5253217f82c045d5d3652a6dc912b54f9e45b4f68fe5552ca12271" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.493519 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.493866 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.495458 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.495521 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.495541 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:39 crc kubenswrapper[5110]: E0130 00:12:39.496018 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.500176 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:12:39 crc kubenswrapper[5110]: I0130 00:12:39.762885 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.012934 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.016178 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.016396 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf"} Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.016684 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.017077 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.017118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.017132 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:40 crc kubenswrapper[5110]: E0130 00:12:40.017486 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.018315 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.018368 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.018379 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:40 crc kubenswrapper[5110]: E0130 00:12:40.018711 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:40 crc kubenswrapper[5110]: I0130 00:12:40.764996 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.020606 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.021109 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.022883 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf" exitCode=255 Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.022984 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf"} Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.023084 5110 scope.go:117] "RemoveContainer" containerID="a644d096ce5253217f82c045d5d3652a6dc912b54f9e45b4f68fe5552ca12271" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.023296 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.023877 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.023920 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.023931 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:41 crc kubenswrapper[5110]: E0130 00:12:41.024275 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.024613 5110 scope.go:117] "RemoveContainer" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf" Jan 30 00:12:41 crc kubenswrapper[5110]: E0130 00:12:41.024849 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:41 crc kubenswrapper[5110]: E0130 00:12:41.399685 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:12:41 crc kubenswrapper[5110]: I0130 00:12:41.763240 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.029210 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.467512 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.467759 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.468562 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.468601 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.468616 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:42 crc kubenswrapper[5110]: E0130 00:12:42.468936 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.469266 5110 scope.go:117] "RemoveContainer" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf" Jan 30 00:12:42 crc kubenswrapper[5110]: E0130 00:12:42.469499 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:42 crc kubenswrapper[5110]: I0130 00:12:42.764479 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.082992 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d2669c3702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.082992 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d2669c3702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,LastTimestamp:2026-01-30 00:12:18.779412226 +0000 UTC m=+0.737648375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.091235 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.104170 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.111652 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.120128 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26f63acb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.926701744 +0000 UTC m=+0.884937883,LastTimestamp:2026-01-30 00:12:18.926701744 +0000 UTC m=+0.884937883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.128086 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.973364311 +0000 UTC m=+0.931600440,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.134245 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.973384782 +0000 UTC m=+0.931620911,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.139794 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.973395702 +0000 UTC m=+0.931631831,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.146265 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.975122059 +0000 UTC m=+0.933358188,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.152400 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.975139459 +0000 UTC m=+0.933375588,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.158478 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.975152389 +0000 UTC m=+0.933388518,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.164569 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.97754978 +0000 UTC m=+0.935785909,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.169413 5110 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.97756613 +0000 UTC m=+0.935802259,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.174323 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.977582211 +0000 UTC m=+0.935818340,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.179178 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.97897131 +0000 UTC m=+0.937207439,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.185116 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.97898614 +0000 UTC m=+0.937222269,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc 
kubenswrapper[5110]: E0130 00:12:43.189132 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.978997601 +0000 UTC m=+0.937233730,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.193069 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.979048122 +0000 UTC m=+0.937284261,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.196854 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.979082172 +0000 UTC m=+0.937318311,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.200602 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.979095913 +0000 UTC m=+0.937332052,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.204307 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.980450511 +0000 UTC m=+0.938686660,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.208393 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a7447ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a7447ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843903981 +0000 UTC m=+0.802140110,LastTimestamp:2026-01-30 00:12:18.980479502 +0000 UTC m=+0.938715641,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.211963 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.980545553 +0000 UTC m=+0.938781682,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.216581 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a749ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a749ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843925412 +0000 UTC 
m=+0.802161541,LastTimestamp:2026-01-30 00:12:18.980556184 +0000 UTC m=+0.938792323,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.220388 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59d26a74c21e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59d26a74c21e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:18.843935262 +0000 UTC m=+0.802171391,LastTimestamp:2026-01-30 00:12:18.980559894 +0000 UTC m=+0.938796023,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.225867 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2894f17dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:19.361560541 +0000 UTC m=+1.319796660,LastTimestamp:2026-01-30 00:12:19.361560541 +0000 UTC m=+1.319796660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.229926 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d28af3bdeb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:19.389128171 +0000 UTC m=+1.347364330,LastTimestamp:2026-01-30 00:12:19.389128171 +0000 UTC m=+1.347364330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc 
kubenswrapper[5110]: E0130 00:12:43.235887 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d28b0d85d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:19.390817748 +0000 UTC m=+1.349053877,LastTimestamp:2026-01-30 00:12:19.390817748 +0000 UTC m=+1.349053877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.240152 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d28b8dedf8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:19.399233016 +0000 UTC m=+1.357469135,LastTimestamp:2026-01-30 00:12:19.399233016 +0000 UTC m=+1.357469135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.244758 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d28c1fdb7e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:19.408796542 +0000 UTC m=+1.367032671,LastTimestamp:2026-01-30 00:12:19.408796542 +0000 UTC m=+1.367032671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.251427 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2b4217d3c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.079992124 +0000 UTC m=+2.038228253,LastTimestamp:2026-01-30 00:12:20.079992124 +0000 UTC m=+2.038228253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.256166 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d2b443ca15 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.082240021 +0000 UTC m=+2.040476150,LastTimestamp:2026-01-30 00:12:20.082240021 +0000 UTC m=+2.040476150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.262389 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2b44683b2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.08241861 +0000 UTC m=+2.040654739,LastTimestamp:2026-01-30 00:12:20.08241861 +0000 UTC m=+2.040654739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.267512 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2b47d21b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.085998005 +0000 UTC m=+2.044234144,LastTimestamp:2026-01-30 00:12:20.085998005 +0000 UTC m=+2.044234144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.277901 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d2b48ba1e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.086948324 +0000 UTC m=+2.045184463,LastTimestamp:2026-01-30 00:12:20.086948324 +0000 UTC m=+2.045184463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.283171 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2b512ea93 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.095814291 +0000 UTC m=+2.054050430,LastTimestamp:2026-01-30 00:12:20.095814291 +0000 UTC m=+2.054050430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.288750 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2b5140d79 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.095888761 +0000 UTC m=+2.054124930,LastTimestamp:2026-01-30 00:12:20.095888761 +0000 UTC 
m=+2.054124930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.292305 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2b5310192 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.097786258 +0000 UTC m=+2.056022387,LastTimestamp:2026-01-30 00:12:20.097786258 +0000 UTC m=+2.056022387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.294999 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d2b56bcca4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.101639332 +0000 UTC m=+2.059875471,LastTimestamp:2026-01-30 00:12:20.101639332 +0000 UTC m=+2.059875471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.298714 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2b5a05479 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.105081977 +0000 UTC m=+2.063318116,LastTimestamp:2026-01-30 00:12:20.105081977 +0000 UTC m=+2.063318116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.304043 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d2b5a077b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.105090997 +0000 UTC m=+2.063327166,LastTimestamp:2026-01-30 00:12:20.105090997 +0000 UTC m=+2.063327166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.311683 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2c7b529b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.408437171 +0000 UTC m=+2.366673300,LastTimestamp:2026-01-30 00:12:20.408437171 +0000 UTC m=+2.366673300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.316912 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2c8ae489b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.424763547 +0000 UTC m=+2.382999676,LastTimestamp:2026-01-30 00:12:20.424763547 +0000 UTC m=+2.382999676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.324471 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2c8c47e7f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.426219135 +0000 UTC m=+2.384455294,LastTimestamp:2026-01-30 00:12:20.426219135 +0000 UTC m=+2.384455294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.332052 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2e4937a5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.892768859 +0000 UTC m=+2.851004988,LastTimestamp:2026-01-30 00:12:20.892768859 +0000 UTC m=+2.851004988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.338090 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d2e4c1d72a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.895807274 +0000 UTC m=+2.854043403,LastTimestamp:2026-01-30 00:12:20.895807274 +0000 UTC m=+2.854043403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.343807 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2e5483192 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.904612242 +0000 UTC m=+2.862848411,LastTimestamp:2026-01-30 00:12:20.904612242 +0000 UTC m=+2.862848411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.348892 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d2e54a5f7f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:20.904755071 +0000 UTC m=+2.862991230,LastTimestamp:2026-01-30 00:12:20.904755071 +0000 UTC m=+2.862991230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.355224 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2eed41fae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.064777646 +0000 UTC m=+3.023013765,LastTimestamp:2026-01-30 00:12:21.064777646 +0000 UTC m=+3.023013765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.360042 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2f100f032 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.101269042 +0000 UTC m=+3.059505171,LastTimestamp:2026-01-30 00:12:21.101269042 +0000 UTC m=+3.059505171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.366960 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2f117bb58 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.10276284 +0000 UTC m=+3.060998969,LastTimestamp:2026-01-30 00:12:21.10276284 +0000 UTC m=+3.060998969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.371668 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2f4097534 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.152159028 +0000 UTC m=+3.110395157,LastTimestamp:2026-01-30 00:12:21.152159028 +0000 UTC m=+3.110395157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.376376 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2f41768e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.153073376 +0000 UTC m=+3.111309505,LastTimestamp:2026-01-30 00:12:21.153073376 +0000 UTC m=+3.111309505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.382019 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d2f4200355 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.153637205 +0000 UTC m=+3.111873334,LastTimestamp:2026-01-30 00:12:21.153637205 +0000 UTC m=+3.111873334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.386695 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2f610e9f1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.186202097 +0000 UTC m=+3.144438226,LastTimestamp:2026-01-30 00:12:21.186202097 +0000 UTC m=+3.144438226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.392546 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2f618a6cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.186709197 +0000 UTC m=+3.144945326,LastTimestamp:2026-01-30 00:12:21.186709197 +0000 UTC m=+3.144945326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.395785 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d2f6252d84 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.187530116 +0000 UTC m=+3.145766245,LastTimestamp:2026-01-30 00:12:21.187530116 +0000 UTC m=+3.145766245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.398099 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d2f62cdc09 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.188033545 +0000 UTC m=+3.146269674,LastTimestamp:2026-01-30 00:12:21.188033545 +0000 UTC m=+3.146269674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.400970 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d2f6485580 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.189834112 +0000 UTC m=+3.148070241,LastTimestamp:2026-01-30 00:12:21.189834112 +0000 UTC m=+3.148070241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.403052 5110 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d2fb10ed4f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.270089039 +0000 UTC m=+3.228325168,LastTimestamp:2026-01-30 00:12:21.270089039 +0000 UTC m=+3.228325168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.407255 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d2fcbcbd70 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.298126192 +0000 UTC m=+3.256362321,LastTimestamp:2026-01-30 00:12:21.298126192 +0000 UTC m=+3.256362321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.412292 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59d2fcd7c911 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.299898641 +0000 UTC m=+3.258134770,LastTimestamp:2026-01-30 00:12:21.299898641 +0000 UTC m=+3.258134770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.417425 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d300f5e075 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.368979573 +0000 UTC m=+3.327215702,LastTimestamp:2026-01-30 00:12:21.368979573 +0000 UTC m=+3.327215702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.422744 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d3024abc0b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.391318027 +0000 UTC m=+3.349554166,LastTimestamp:2026-01-30 00:12:21.391318027 +0000 UTC m=+3.349554166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.428055 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d30265cf72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.393092466 +0000 UTC m=+3.351328595,LastTimestamp:2026-01-30 00:12:21.393092466 +0000 UTC m=+3.351328595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.432904 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d303523544 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.408585028 +0000 UTC m=+3.366821157,LastTimestamp:2026-01-30 00:12:21.408585028 +0000 UTC m=+3.366821157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.438729 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d30364a1d6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.40979247 +0000 UTC m=+3.368028609,LastTimestamp:2026-01-30 00:12:21.40979247 +0000 UTC m=+3.368028609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.443571 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d3038507b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.411915702 +0000 UTC m=+3.370151831,LastTimestamp:2026-01-30 00:12:21.411915702 +0000 UTC m=+3.370151831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.448993 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d303a0f390 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.413745552 +0000 UTC 
m=+3.371981681,LastTimestamp:2026-01-30 00:12:21.413745552 +0000 UTC m=+3.371981681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.455787 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31049854f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.626119503 +0000 UTC m=+3.584355622,LastTimestamp:2026-01-30 00:12:21.626119503 +0000 UTC m=+3.584355622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.460678 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d3106e330e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.628523278 +0000 UTC m=+3.586759407,LastTimestamp:2026-01-30 00:12:21.628523278 +0000 UTC m=+3.586759407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.465630 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31128cc80 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.640752256 +0000 UTC m=+3.598988395,LastTimestamp:2026-01-30 00:12:21.640752256 +0000 UTC m=+3.598988395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.470449 5110 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d3113d7882 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.64210701 +0000 UTC m=+3.600343139,LastTimestamp:2026-01-30 00:12:21.64210701 +0000 UTC m=+3.600343139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.477827 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59d3114c7063 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.643087971 +0000 UTC m=+3.601324110,LastTimestamp:2026-01-30 00:12:21.643087971 +0000 UTC m=+3.601324110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.482180 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31dfbfb77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.855918967 +0000 UTC m=+3.814155086,LastTimestamp:2026-01-30 00:12:21.855918967 +0000 UTC m=+3.814155086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.488814 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31f40e806 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.87721319 +0000 UTC m=+3.835449319,LastTimestamp:2026-01-30 00:12:21.87721319 +0000 UTC m=+3.835449319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.496185 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31f50381a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.87821673 +0000 UTC m=+3.836452849,LastTimestamp:2026-01-30 00:12:21.87821673 +0000 UTC m=+3.836452849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.498667 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d321fef179 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.923221881 +0000 UTC m=+3.881458020,LastTimestamp:2026-01-30 00:12:21.923221881 +0000 UTC m=+3.881458020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.504370 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32e462be6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.129216486 +0000 UTC m=+4.087452615,LastTimestamp:2026-01-30 00:12:22.129216486 +0000 UTC m=+4.087452615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.510290 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32f4f81b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.146605497 +0000 UTC m=+4.104841616,LastTimestamp:2026-01-30 00:12:22.146605497 +0000 UTC m=+4.104841616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.515041 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d32fc2beee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.154157806 +0000 UTC m=+4.112393945,LastTimestamp:2026-01-30 00:12:22.154157806 +0000 UTC m=+4.112393945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.523034 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d331270989 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.177507721 +0000 UTC m=+4.135743850,LastTimestamp:2026-01-30 00:12:22.177507721 +0000 UTC m=+4.135743850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.533432 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d35e71e097 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.937387159 +0000 UTC m=+4.895623318,LastTimestamp:2026-01-30 00:12:22.937387159 +0000 UTC m=+4.895623318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.539213 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d36f6f9886 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.22245031 +0000 UTC m=+5.180686439,LastTimestamp:2026-01-30 00:12:23.22245031 +0000 UTC m=+5.180686439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.548428 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d370474216 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.236583958 +0000 UTC m=+5.194820087,LastTimestamp:2026-01-30 00:12:23.236583958 +0000 UTC m=+5.194820087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.553095 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3705b5fac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.237902252 +0000 UTC m=+5.196138381,LastTimestamp:2026-01-30 00:12:23.237902252 +0000 UTC m=+5.196138381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.558604 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3815aa210 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.523066384 +0000 UTC m=+5.481302523,LastTimestamp:2026-01-30 00:12:23.523066384 +0000 UTC m=+5.481302523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.563081 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d38273757b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.541470587 +0000 UTC m=+5.499706756,LastTimestamp:2026-01-30 00:12:23.541470587 +0000 UTC m=+5.499706756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.567727 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3828d9acb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.543184075 +0000 UTC m=+5.501420204,LastTimestamp:2026-01-30 00:12:23.543184075 +0000 UTC 
m=+5.501420204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.573017 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d392ec5590 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.817827728 +0000 UTC m=+5.776063867,LastTimestamp:2026-01-30 00:12:23.817827728 +0000 UTC m=+5.776063867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.581498 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d39404e73c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.8362151 +0000 UTC m=+5.794451239,LastTimestamp:2026-01-30 00:12:23.8362151 +0000 UTC m=+5.794451239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.588389 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3942133eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:23.838069739 +0000 UTC m=+5.796305878,LastTimestamp:2026-01-30 00:12:23.838069739 +0000 UTC m=+5.796305878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.597166 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3a6466d76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:24.14249919 +0000 UTC m=+6.100735359,LastTimestamp:2026-01-30 00:12:24.14249919 +0000 UTC m=+6.100735359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.601525 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3a775b633 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:24.162375219 +0000 UTC m=+6.120611378,LastTimestamp:2026-01-30 00:12:24.162375219 +0000 UTC m=+6.120611378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.605805 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3a79f4524 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:24.165098788 +0000 UTC m=+6.123334927,LastTimestamp:2026-01-30 00:12:24.165098788 +0000 UTC m=+6.123334927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.610877 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3b88ba170 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:24.449024368 +0000 UTC m=+6.407260527,LastTimestamp:2026-01-30 00:12:24.449024368 +0000 UTC m=+6.407260527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.619771 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59d3b9c6545c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:24.469648476 +0000 UTC m=+6.427884645,LastTimestamp:2026-01-30 00:12:24.469648476 +0000 UTC m=+6.427884645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.628886 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59d53d9e8ca5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 00:12:43 crc kubenswrapper[5110]: body: Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:30.976601253 +0000 UTC m=+12.934837412,LastTimestamp:2026-01-30 00:12:30.976601253 +0000 UTC m=+12.934837412,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:12:43 crc kubenswrapper[5110]: > Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.633369 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59d53da108fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:30.976764157 +0000 UTC m=+12.935000326,LastTimestamp:2026-01-30 00:12:30.976764157 +0000 UTC m=+12.935000326,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.638811 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188f59d596785bc9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 30 00:12:43 crc kubenswrapper[5110]: body: Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:32.467270601 +0000 UTC m=+14.425506740,LastTimestamp:2026-01-30 00:12:32.467270601 +0000 UTC m=+14.425506740,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:12:43 crc kubenswrapper[5110]: > Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.643585 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d59679bab1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:32.467360433 +0000 UTC m=+14.425596572,LastTimestamp:2026-01-30 00:12:32.467360433 +0000 UTC m=+14.425596572,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.648410 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188f59d5e417c860 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:12:43 crc kubenswrapper[5110]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:12:43 crc kubenswrapper[5110]: 
Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:33.769564256 +0000 UTC m=+15.727800415,LastTimestamp:2026-01-30 00:12:33.769564256 +0000 UTC m=+15.727800415,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:12:43 crc kubenswrapper[5110]: > Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.653281 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d5e418c9af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:33.769630127 +0000 UTC m=+15.727866286,LastTimestamp:2026-01-30 00:12:33.769630127 +0000 UTC m=+15.727866286,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.659263 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d5e417c860\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188f59d5e417c860 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:12:43 crc kubenswrapper[5110]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:12:43 crc kubenswrapper[5110]: Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:33.769564256 +0000 UTC m=+15.727800415,LastTimestamp:2026-01-30 00:12:33.79863528 +0000 UTC m=+15.756871439,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:12:43 crc kubenswrapper[5110]: > Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.663844 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d5e418c9af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d5e418c9af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:33.769630127 +0000 UTC m=+15.727866286,LastTimestamp:2026-01-30 00:12:33.798716652 +0000 UTC m=+15.756952821,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.669453 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188f59d71232cde3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:49628->192.168.126.11:17697: read: connection reset by peer
Jan 30 00:12:43 crc kubenswrapper[5110]: body:
Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:38.838054371 +0000 UTC m=+20.796290530,LastTimestamp:2026-01-30 00:12:38.838054371 +0000 UTC m=+20.796290530,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 30 00:12:43 crc kubenswrapper[5110]: >
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.676406 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d712347538 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49628->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:38.838162744 +0000 UTC m=+20.796398903,LastTimestamp:2026-01-30 00:12:38.838162744 +0000 UTC m=+20.796398903,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.682242 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 30 00:12:43 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188f59d7123ba62a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 30 00:12:43 crc kubenswrapper[5110]: body:
Jan 30 00:12:43 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:38.838634026 +0000 UTC m=+20.796870185,LastTimestamp:2026-01-30 00:12:38.838634026 +0000 UTC m=+20.796870185,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 30 00:12:43 crc kubenswrapper[5110]: >
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.687960 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d7123c58c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:38.838679747 +0000 UTC m=+20.796915906,LastTimestamp:2026-01-30 00:12:38.838679747 +0000 UTC m=+20.796915906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.696233 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d31f50381a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31f50381a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.87821673 +0000 UTC m=+3.836452849,LastTimestamp:2026-01-30 00:12:39.013530514 +0000 UTC m=+20.971766633,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.702066 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d32e462be6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32e462be6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.129216486 +0000 UTC m=+4.087452615,LastTimestamp:2026-01-30 00:12:39.296083969 +0000 UTC m=+21.254320098,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.706568 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d32f4f81b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32f4f81b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.146605497 +0000 UTC m=+4.104841616,LastTimestamp:2026-01-30 00:12:39.319110088 +0000 UTC m=+21.277346247,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.710922 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: E0130 00:12:43.712799 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:12:42.469468021 +0000 UTC m=+24.427704150,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:43 crc kubenswrapper[5110]: I0130 00:12:43.769187 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:44 crc kubenswrapper[5110]: E0130 00:12:44.491953 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:12:44 crc kubenswrapper[5110]: I0130 00:12:44.766702 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.180205 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.181867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.181962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.181985 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.182032 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:12:45 crc kubenswrapper[5110]: E0130 00:12:45.197118 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:12:45 crc kubenswrapper[5110]: I0130 00:12:45.764789 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:46 crc kubenswrapper[5110]: E0130 00:12:46.575296 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:12:46 crc kubenswrapper[5110]: I0130 00:12:46.764536 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:47 crc kubenswrapper[5110]: E0130 00:12:47.251722 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:12:47 crc kubenswrapper[5110]: I0130 00:12:47.766059 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:48 crc kubenswrapper[5110]: E0130 00:12:48.408430 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:12:48 crc kubenswrapper[5110]: E0130 00:12:48.649821 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:12:48 crc kubenswrapper[5110]: I0130 00:12:48.765681 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:48 crc kubenswrapper[5110]: E0130 00:12:48.929517 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:12:49 crc kubenswrapper[5110]: I0130 00:12:49.764088 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.017752 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.018192 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.131187 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.131302 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.131369 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:50 crc kubenswrapper[5110]: E0130 00:12:50.132506 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.132966 5110 scope.go:117] "RemoveContainer" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf"
Jan 30 00:12:50 crc kubenswrapper[5110]: E0130 00:12:50.133423 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:12:50 crc kubenswrapper[5110]: E0130 00:12:50.141008 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:12:50.133319498 +0000 UTC m=+32.091555667,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:12:50 crc kubenswrapper[5110]: I0130 00:12:50.767067 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:51 crc kubenswrapper[5110]: I0130 00:12:51.765069 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.198130 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.199683 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.199754 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.199778 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.199822 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:12:52 crc kubenswrapper[5110]: E0130 00:12:52.214892 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:12:52 crc kubenswrapper[5110]: I0130 00:12:52.765528 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:53 crc kubenswrapper[5110]: I0130 00:12:53.764410 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:54 crc kubenswrapper[5110]: I0130 00:12:54.766002 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:55 crc kubenswrapper[5110]: E0130 00:12:55.416970 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:12:55 crc kubenswrapper[5110]: I0130 00:12:55.765942 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:56 crc kubenswrapper[5110]: I0130 00:12:56.765690 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:57 crc kubenswrapper[5110]: I0130 00:12:57.763982 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:58 crc kubenswrapper[5110]: I0130 00:12:58.766433 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:12:58 crc kubenswrapper[5110]: E0130 00:12:58.931154 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.215038 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.216869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.216957 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.216981 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.217033 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:12:59 crc kubenswrapper[5110]: E0130 00:12:59.234327 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:12:59 crc kubenswrapper[5110]: I0130 00:12:59.765237 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.766453 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.872529 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.874022 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.874105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.874126 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:00 crc kubenswrapper[5110]: E0130 00:13:00.874820 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:00 crc kubenswrapper[5110]: I0130 00:13:00.875465 5110 scope.go:117] "RemoveContainer" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf"
Jan 30 00:13:00 crc kubenswrapper[5110]: E0130 00:13:00.888505 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d31f50381a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31f50381a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.87821673 +0000 UTC m=+3.836452849,LastTimestamp:2026-01-30 00:13:00.877906114 +0000 UTC m=+42.836142273,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:01 crc kubenswrapper[5110]: E0130 00:13:01.158115 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d32e462be6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32e462be6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.129216486 +0000 UTC m=+4.087452615,LastTimestamp:2026-01-30 00:13:01.149861627 +0000 UTC m=+43.108097766,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:01 crc kubenswrapper[5110]: E0130 00:13:01.169234 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d32f4f81b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d32f4f81b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:22.146605497 +0000 UTC m=+4.104841616,LastTimestamp:2026-01-30 00:13:01.166829951 +0000 UTC m=+43.125066080,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:01 crc kubenswrapper[5110]: I0130 00:13:01.170213 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:13:01 crc kubenswrapper[5110]: I0130 00:13:01.174125 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"}
Jan 30 00:13:01 crc kubenswrapper[5110]: I0130 00:13:01.767404 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.181844 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.182896 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.185653 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74" exitCode=255
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.185753 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"}
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.185813 5110 scope.go:117] "RemoveContainer" containerID="54c27077317eec7b5c98319ec3c9ebd586147636b4b3472d0302be3de00086cf"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.186018 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.187957 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.188021 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.188041 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:02 crc kubenswrapper[5110]: E0130 00:13:02.188577 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.189240 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:02 crc kubenswrapper[5110]: E0130 00:13:02.189708 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:02 crc kubenswrapper[5110]: E0130 00:13:02.199437 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:13:02.189662881 +0000 UTC m=+44.147899040,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:02 crc kubenswrapper[5110]: E0130 00:13:02.425980 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.467396 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:02 crc kubenswrapper[5110]: E0130 00:13:02.469318 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:13:02 crc kubenswrapper[5110]: I0130 00:13:02.765632 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.192642 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.197022 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.198031 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.198131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.198159 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:03 crc kubenswrapper[5110]: E0130 00:13:03.198896 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.199418 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:03 crc kubenswrapper[5110]: E0130 00:13:03.199779 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:03 crc kubenswrapper[5110]: E0130 00:13:03.207885 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:13:03.199718052 +0000 UTC m=+45.157954221,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:03 crc kubenswrapper[5110]: I0130 00:13:03.768299 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.200746 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.201419 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.201466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.201482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:04 crc kubenswrapper[5110]: E0130 00:13:04.202003 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.202304 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:04 crc kubenswrapper[5110]: E0130 00:13:04.202619 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:04 crc kubenswrapper[5110]: E0130 00:13:04.210785 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:13:04.202573199 +0000 UTC m=+46.160809338,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:04 crc kubenswrapper[5110]: I0130 00:13:04.766661 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:05 crc kubenswrapper[5110]: I0130 00:13:05.767799 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.235410 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.236759 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.236971 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.237149 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.237401 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:13:06 crc kubenswrapper[5110]: E0130 00:13:06.254748 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:13:06 crc kubenswrapper[5110]: E0130 00:13:06.455265 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:13:06 crc kubenswrapper[5110]: E0130 00:13:06.695994 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:13:06 crc kubenswrapper[5110]: I0130 00:13:06.767236 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.178104 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.178554 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.179893 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.179969 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.179991 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:07 crc kubenswrapper[5110]: E0130 00:13:07.180648 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:07 crc kubenswrapper[5110]: I0130 00:13:07.766764 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:08 crc kubenswrapper[5110]: I0130 00:13:08.766939 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:08 crc kubenswrapper[5110]: E0130 00:13:08.932307 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:13:09 crc kubenswrapper[5110]: E0130 00:13:09.436679 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:13:09 crc kubenswrapper[5110]: I0130 00:13:09.766517 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.016978 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.018546 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.021121 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.021387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.021571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:10 crc kubenswrapper[5110]: E0130 00:13:10.022317 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.023140 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:10 crc kubenswrapper[5110]: E0130 00:13:10.023699 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:10 crc kubenswrapper[5110]: E0130 00:13:10.032202 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d79489ef0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d79489ef0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:41.024802572 +0000 UTC m=+22.983038701,LastTimestamp:2026-01-30 00:13:10.023639033 +0000 UTC m=+51.981875202,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:10 crc kubenswrapper[5110]: I0130 00:13:10.760194 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:11 crc kubenswrapper[5110]: E0130 00:13:11.410726 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:13:11 crc kubenswrapper[5110]: I0130 00:13:11.765472 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:12 crc kubenswrapper[5110]: I0130 00:13:12.765681 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.254877 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.256991 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.257059 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.257072 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.257104 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:13:13 crc kubenswrapper[5110]: E0130 00:13:13.277559 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:13:13 crc kubenswrapper[5110]: I0130 00:13:13.765939 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:14 crc kubenswrapper[5110]: I0130 00:13:14.766290 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:15 crc kubenswrapper[5110]: I0130 00:13:15.766231 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:16 crc kubenswrapper[5110]: E0130 00:13:16.439697 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:13:16 crc kubenswrapper[5110]: I0130 00:13:16.766174 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:17 crc kubenswrapper[5110]: I0130 00:13:17.765496 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:18 crc kubenswrapper[5110]: I0130 00:13:18.765062 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:18 crc kubenswrapper[5110]: E0130 00:13:18.932941 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:13:19 crc kubenswrapper[5110]: I0130 00:13:19.763092 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.278046 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.280763 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.280848 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.280869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.280910 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:13:20 crc kubenswrapper[5110]: E0130 00:13:20.299227 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:13:20 crc kubenswrapper[5110]: I0130 00:13:20.765083 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:21 crc kubenswrapper[5110]: I0130 00:13:21.765961 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.763527 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.871811 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.872978 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.873032 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.873048 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:22 crc kubenswrapper[5110]: E0130 00:13:22.873479 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:22 crc kubenswrapper[5110]: I0130 00:13:22.873834 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:22 crc kubenswrapper[5110]: E0130 00:13:22.890102 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59d31f50381a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59d31f50381a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:12:21.87821673 +0000 UTC m=+3.836452849,LastTimestamp:2026-01-30 00:13:22.876712847 +0000 UTC m=+64.834948976,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.266072 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.268315 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f"}
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.268647 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.269376 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.269754 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.269841 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:23 crc kubenswrapper[5110]: E0130 00:13:23.270356 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:23 crc kubenswrapper[5110]: E0130 00:13:23.446018 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:13:23 crc kubenswrapper[5110]: I0130 00:13:23.762735 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:13:24 crc kubenswrapper[5110]: I0130 00:13:24.334824 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-ln2p2"
Jan 30 00:13:24 crc kubenswrapper[5110]: I0130 00:13:24.343517 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-ln2p2"
Jan 30 00:13:24 crc kubenswrapper[5110]: I0130 00:13:24.381591 5110 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 00:13:24 crc kubenswrapper[5110]: I0130 00:13:24.565091 5110 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.276691 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.277372 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.280152 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" exitCode=255
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.280219 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f"}
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.280268 5110 scope.go:117] "RemoveContainer" containerID="2ecf2c0aa9a0277cacaebbe9a8051864b5b7a4bfec1bd2f26f225239fcc55e74"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.280528 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.281465 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.281502 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.281516 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:25 crc kubenswrapper[5110]: E0130 00:13:25.281930 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.282160 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f"
Jan 30 00:13:25 crc kubenswrapper[5110]: E0130 00:13:25.282469 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.345266 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-01 00:08:24 +0000 UTC" deadline="2026-02-22 19:31:38.506202357 +0000 UTC"
Jan 30 00:13:25 crc kubenswrapper[5110]: I0130 00:13:25.345357 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="571h18m13.160872453s"
Jan 30 00:13:26 crc kubenswrapper[5110]: I0130 00:13:26.286536 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.300701 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.302106 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.302165 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.302181 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.302307 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.315071 5110 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.315432 5110 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.315457 5110 kubelet_node_status.go:597] "Error
updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.320253 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.320313 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.320353 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.320380 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.320396 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:27Z","lastTransitionTime":"2026-01-30T00:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.334430 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.343194 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.343262 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.343276 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.343301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.343321 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:27Z","lastTransitionTime":"2026-01-30T00:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.357074 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.372728 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.372802 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.372816 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.372843 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.372860 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:27Z","lastTransitionTime":"2026-01-30T00:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.385867 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.396786 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.396859 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.396873 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.396897 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:27 crc kubenswrapper[5110]: I0130 00:13:27.396914 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:27Z","lastTransitionTime":"2026-01-30T00:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.409423 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.409592 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.409624 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.510302 5110 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.610950 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.711110 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.812312 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:27 crc kubenswrapper[5110]: E0130 00:13:27.913304 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.014447 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.115178 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.215692 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.316390 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.417121 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.518295 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.619109 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.720194 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.821103 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.922215 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:28 crc kubenswrapper[5110]: E0130 00:13:28.934693 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.022602 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.123797 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.224201 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.325187 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.425935 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc 
kubenswrapper[5110]: E0130 00:13:29.526782 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.627106 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.728137 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.828961 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:29 crc kubenswrapper[5110]: E0130 00:13:29.930105 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.031148 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.132231 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.233424 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.334299 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.435104 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.535221 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.635639 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.736632 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.837670 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:30 crc kubenswrapper[5110]: E0130 00:13:30.938191 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.039224 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.140390 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.241179 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.342098 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.442420 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.543298 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.643543 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.744325 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.845069 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:31 crc kubenswrapper[5110]: E0130 00:13:31.946151 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.046766 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.147160 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.247939 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.348750 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.449099 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.467514 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.467984 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.469383 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.469440 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.469456 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.470064 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:13:32 crc kubenswrapper[5110]: I0130 00:13:32.470421 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.470715 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.549317 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.649568 5110 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.750643 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.850856 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:32 crc kubenswrapper[5110]: E0130 00:13:32.951322 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.052436 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.153096 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.221043 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.254180 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.269504 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.308036 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.308788 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.308829 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.308844 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.309375 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:13:33 crc kubenswrapper[5110]: I0130 00:13:33.309701 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.309946 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.355093 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.456101 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.556425 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.657445 5110 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.758108 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.858899 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:33 crc kubenswrapper[5110]: E0130 00:13:33.960136 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.060495 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.161628 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.262654 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.363437 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.463772 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.564842 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.665274 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.765608 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.866780 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:34 crc kubenswrapper[5110]: E0130 00:13:34.966919 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.067544 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.168613 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.268745 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.369835 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.470251 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.570920 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.671516 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 
00:13:35.772107 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: I0130 00:13:35.872077 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.872681 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:35 crc kubenswrapper[5110]: I0130 00:13:35.873430 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:35 crc kubenswrapper[5110]: I0130 00:13:35.873516 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:35 crc kubenswrapper[5110]: I0130 00:13:35.873545 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.874363 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:13:35 crc kubenswrapper[5110]: E0130 00:13:35.972757 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.073070 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.173633 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.274571 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.375690 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.476032 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.576854 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.677424 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.777903 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.878228 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:36 crc kubenswrapper[5110]: E0130 00:13:36.978863 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.079104 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.180272 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.281101 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.382582 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.483515 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.584442 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.685675 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.765703 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.771361 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.771413 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.771435 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.771462 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.771481 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:37Z","lastTransitionTime":"2026-01-30T00:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.792161 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.802254 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.802326 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.802387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.802413 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.802466 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:37Z","lastTransitionTime":"2026-01-30T00:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.818939 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.824470 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.824528 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.824547 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.824571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.824589 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:37Z","lastTransitionTime":"2026-01-30T00:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.841414 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.846033 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.846084 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.846103 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.846126 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:37 crc kubenswrapper[5110]: I0130 00:13:37.846144 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:37Z","lastTransitionTime":"2026-01-30T00:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.862636 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.862886 5110 
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.862943 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:13:37 crc kubenswrapper[5110]: E0130 00:13:37.963982 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.064859 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.165623 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.266595 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.332602 5110 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.370055 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.370118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.370133 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.370154 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.370169 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.385657 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.405223 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.472195 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.472252 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.472264 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.472282 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.472295 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.501249 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.574924 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.574981 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.574995 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.575014 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.575028 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.603717 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.677909 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.677958 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.677968 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.677985 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.677995 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.704173 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.772963 5110 apiserver.go:52] "Watching apiserver"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.780758 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.780858 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.780880 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.780906 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.780925 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.784969 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.785939 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-etcd/etcd-crc","openshift-multus/multus-additional-cni-plugins-jf6rt","openshift-multus/network-metrics-daemon-vwf28","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-image-registry/node-ca-kz9zz","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-t6dv6","openshift-multus/multus-v6j88","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-dns/node-resolver-pll77","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx","openshift-ovn-kubernetes/ovnkube-node-xdrfx"]
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.788010 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.789312 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.789578 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.791100 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.791617 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.791620 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.792229 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.792400 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.793295 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.793831 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.800297 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.800591 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.800548 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.800610 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.801519 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.803221 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.818540 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.819446 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v6j88" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.819706 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.819794 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.823652 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.823842 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.824471 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.824743 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.824940 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.832730 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.837495 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.837810 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.838536 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.838419 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.843268 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.844240 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.851202 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.852178 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.852590 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.853099 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.853269 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.853693 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.853772 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.858726 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.860863 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.861724 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.862250 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.862610 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.862998 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.865391 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.865704 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.866282 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.867309 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.870087 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.870430 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.870604 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.870634 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.870703 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.871611 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.871611 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.875841 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.875896 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.875843 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.875831 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.877144 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" Jan 30 00:13:38 crc kubenswrapper[5110]: E0130 00:13:38.877881 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.886157 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.886218 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.886237 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.886266 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.886286 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:38Z","lastTransitionTime":"2026-01-30T00:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.888823 5110 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.898726 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.914277 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.933379 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.948357 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.959840 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975759 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975822 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975854 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975881 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975905 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975928 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.975953 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.976000 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.976037 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.976522 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.976754 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.976768 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978594 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978704 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978760 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978815 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978888 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.978960 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979010 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979059 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979116 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979177 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979120 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979354 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979233 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979228 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979526 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979507 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979607 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979797 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979863 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979918 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979972 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980027 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980665 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980746 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980805 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980849 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980881 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980917 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980963 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980993 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981033 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981112 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981152 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981187 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981234 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981282 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981320 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981415 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981477 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981523 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981565 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981613 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981662 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981719 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981757 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981803 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981847 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981874 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981906 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981943 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981972 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982005 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982041 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982081 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982107 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982139 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982173 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982205 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982263 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982298 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982371 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982414 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982465 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.979712 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980003 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980535 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982518 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982566 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982614 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982665 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982720 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982767 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982821 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982841 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982802 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980712 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982896 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982999 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981000 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982898 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983095 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983176 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983205 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983323 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983373 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983395 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983456 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983494 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983578 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983605 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983629 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983657 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983839 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983878 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983929 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983959 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984311 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984367 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984397 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984427 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984455 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984486 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984528 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984552 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984580 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981458 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981869 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.981886 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982039 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982202 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982263 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982446 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.982468 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.980567 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983546 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983801 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.983924 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984080 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984187 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984575 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984664 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.984922 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985110 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985498 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985527 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985822 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.986147 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.986171 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.986234 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985461 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.987169 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.987238 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.987372 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.987785 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.987839 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988064 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988102 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988134 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.985943 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988348 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988375 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988410 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988439 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988461 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988490 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 
00:13:38.988517 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988545 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988566 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988593 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988619 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988641 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988665 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988712 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988758 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988786 
5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988811 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988832 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988855 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988877 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988900 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988925 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988943 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988976 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.988999 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989018 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989042 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989075 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989101 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989123 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989148 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989170 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989190 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod 
\"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989212 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989169 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989242 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989266 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989290 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989317 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989362 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989384 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989412 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989444 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989449 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989470 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989493 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989647 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989673 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989695 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989718 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989743 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989767 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " 
Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989765 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989793 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989825 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989871 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989873 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989913 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.989978 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990007 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990011 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990037 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990063 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990052 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990091 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990127 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990153 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990179 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990249 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990547 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990583 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990667 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990780 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990854 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990915 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.990970 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991010 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991045 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991110 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991290 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991461 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991437 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991997 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.991827 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992228 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992392 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992463 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992562 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992774 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.992919 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993237 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993599 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993713 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993883 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993925 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.993959 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.994155 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.994721 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.994722 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.994779 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.994839 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995015 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995172 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995176 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995491 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995244 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995592 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995813 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.995923 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.996057 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.996107 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997219 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997327 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997663 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997681 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997858 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.997932 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.998065 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.999112 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.999512 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.999925 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5110]: I0130 00:13:38.999944 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:38.999938 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.000616 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.000652 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.000720 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.000739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.000895 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.001087 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.001197 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.001560 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.001963 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.002054 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.002387 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.002441 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.002720 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.002810 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003038 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003092 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003102 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003435 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003514 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003589 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.003675 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.004275 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.004423 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.004478 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.004542 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005007 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005009 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005085 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005206 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005426 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005486 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005625 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005639 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005663 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005930 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.005755 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006214 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006488 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006613 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006657 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006712 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006814 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006948 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.006957 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007181 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007384 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007414 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007395 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007852 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007873 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.007913 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008299 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008363 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008541 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008579 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008638 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008698 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008889 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.008995 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009013 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009181 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009193 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009413 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009485 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009771 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.009933 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.010140 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.010248 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.010325 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.010786 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.011173 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.011283 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.011536 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.011574 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.012388 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.012408 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.012679 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.012750 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.013114 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.013170 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.013191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.013220 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.013242 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.014116 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.014403 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.014724 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.015131 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.015576 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.015929 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.016121 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.017071 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.018445 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.018458 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.018556 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.018867 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019071 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019253 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019392 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019495 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019621 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.019700 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020062 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020130 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020506 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020573 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020812 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020882 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.020953 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.021094 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.021474 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.021491 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.021557 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.021917 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022071 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022077 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022195 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022404 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022687 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.022805 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023020 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023087 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023108 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023148 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023243 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023300 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023369 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.023667 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:13:39.523642882 +0000 UTC m=+81.481879021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023696 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023729 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023738 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023755 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023783 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023809 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023835 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023832 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
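
Two failure mechanisms show up together in the E-level entry above. The CSI unmount for pvc-b21f41aa-... fails because the kubevirt.io.hostpath-provisioner node plugin has not re-registered with the kubelet yet, and the nestedpendingoperations layer then blocks retries of that exact volume/pod operation until a deadline 500ms out, with the delay growing on repeated failures. A sketch of that per-operation backoff; the 500ms initial delay matches durationBeforeRetry in the log, while the doubling factor and cap below are assumptions:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Sketch of the "No retries permitted until ... (durationBeforeRetry ...)"
    // behaviour: each failed volume operation gets a grow-on-failure delay.
    // Initial delay matches the log; factor and cap are assumptions.
    type expBackoff struct {
    	delay    time.Duration
    	maxDelay time.Duration
    	deadline time.Time
    }

    func (b *expBackoff) recordFailure(now time.Time) {
    	if b.delay == 0 {
    		b.delay = 500 * time.Millisecond
    	} else if b.delay *= 2; b.delay > b.maxDelay {
    		b.delay = b.maxDelay
    	}
    	b.deadline = now.Add(b.delay)
    	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
    		b.deadline.Format(time.RFC3339Nano), b.delay)
    }

    func main() {
    	b := expBackoff{maxDelay: 2 * time.Minute} // cap value is an assumption
    	now := time.Now()
    	for i := 0; i < 4; i++ {
    		b.recordFailure(now)
    		now = b.deadline // pretend the retry at the deadline fails again
    	}
    }

The unmount and mount bookkeeping continues in parallel:
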
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023859 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023887 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023918 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023944 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023969 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.023995 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024021 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024051 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024091 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024129 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") 
pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024157 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024219 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024764 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024868 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024887 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024926 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-multus-certs\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024962 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.024989 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025015 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025039 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-serviceca\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025063 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025068 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025123 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025166 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-bin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025215 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025288 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025197 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025475 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025585 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025655 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025753 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/97dc714a-5d84-4c81-99ef-13067437fcad-rootfs\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025800 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-kubelet\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025840 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sdgv\" (UniqueName: \"kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025878 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6lk\" (UniqueName: \"kubernetes.io/projected/97dc714a-5d84-4c81-99ef-13067437fcad-kube-api-access-gt6lk\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025923 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-daemon-config\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025962 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/0bf0b3ab-206c-49bb-a5bd-f177b968c344-hosts-file\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026001 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026060 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026111 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026163 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97dc714a-5d84-4c81-99ef-13067437fcad-proxy-tls\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025752 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.025879 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026269 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97dc714a-5d84-4c81-99ef-13067437fcad-mcd-auth-proxy-config\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026186 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026478 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.026677 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026915 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026970 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-etc-kubernetes\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.026997 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0bf0b3ab-206c-49bb-a5bd-f177b968c344-tmp-dir\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.027066 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:39.527043981 +0000 UTC m=+81.485280150 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.027305 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-system-cni-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.027515 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.027487 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.027855 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-netns\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028021 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-multus\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028134 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
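
The secret.go error above, object "openshift-network-console"/"networking-console-plugin-cert" not registered, is a different failure mode from a missing secret: the kubelet serves secrets and configmaps from a node-local cache that is populated as the pods referencing them are registered, and in this restart window the pod has not been re-registered yet, so MountVolume.SetUp fails and re-queues with the same 500ms backoff. A toy cache that reproduces the error shape; the type and registration flow are illustrative only:

    package main

    import "fmt"

    // Toy version of the node-local object cache behind "object ... not
    // registered": lookups fail until the referencing pod registers the key.
    type objectCache map[string]bool // key: namespace/name

    func (c objectCache) getSecret(ns, name string) error {
    	if !c[ns+"/"+name] {
    		return fmt.Errorf("object %q/%q not registered", ns, name)
    	}
    	return nil
    }

    func main() {
    	c := objectCache{}
    	if err := c.getSecret("openshift-network-console", "networking-console-plugin-cert"); err != nil {
    		fmt.Println("MountVolume.SetUp failed:", err) // same shape as the log line
    	}
    	c["openshift-network-console/networking-console-plugin-cert"] = true // pod (re)registered later
    	fmt.Println("after registration, err =", c.getSecret("openshift-network-console", "networking-console-plugin-cert"))
    }

The reconciler output continues:
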
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028216 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kcp\" (UniqueName: \"kubernetes.io/projected/f47cb22d-f09e-43a7-95e0-0e1008827f08-kube-api-access-d8kcp\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028295 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz286\" (UniqueName: \"kubernetes.io/projected/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-kube-api-access-pz286\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028429 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cnibin\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028492 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-system-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028529 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-binary-copy\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028573 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028608 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028650 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028702 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028739 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028776 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-os-release\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028810 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-conf-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.028899 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029003 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029053 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029097 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029140 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc 
kubenswrapper[5110]: I0130 00:13:39.029178 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029213 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029764 5110 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.029975 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030068 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-host\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030129 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjsx9\" (UniqueName: \"kubernetes.io/projected/560f3d2b-f6b8-42cd-9a6a-2c141c780302-kube-api-access-qjsx9\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030196 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030259 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9v2\" (UniqueName: \"kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: 
\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030390 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697d4\" (UniqueName: \"kubernetes.io/projected/1fbd252e-c54f-4a19-b637-adb4d23722fc-kube-api-access-697d4\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.032276 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.032461 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.030704 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033199 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033404 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033787 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033775 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033855 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.033899 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034011 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-k8s-cni-cncf-io\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034042 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034170 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034024 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034210 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-cni-binary-copy\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034355 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-socket-dir-parent\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.034595 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.034766 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:39.534732213 +0000 UTC m=+81.492968382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034387 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.034974 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-cnibin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.035378 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.035643 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-os-release\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.035766 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.035892 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.035922 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-hostroot\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037045 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2zm\" (UniqueName: \"kubernetes.io/projected/0bf0b3ab-206c-49bb-a5bd-f177b968c344-kube-api-access-5m2zm\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037095 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037130 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037194 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037231 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib\") pod 
\"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037258 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037523 5110 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037558 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037595 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037616 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037633 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037646 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037659 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037672 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037684 5110 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.037698 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038035 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc 
kubenswrapper[5110]: I0130 00:13:39.038164 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038199 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038222 5110 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038243 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038266 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038287 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038307 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038328 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038373 5110 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038402 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038462 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038486 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038508 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038530 
5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038550 5110 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038574 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038595 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038616 5110 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038639 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038660 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038681 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038704 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038727 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038746 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038768 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038789 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038810 5110 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038830 5110 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038849 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038877 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038896 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038916 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038936 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038963 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.038991 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039025 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039151 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039176 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039196 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039217 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039241 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039275 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039304 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039386 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039408 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039428 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039447 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039468 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039490 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039509 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039529 5110 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039548 5110 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039569 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: 
\"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039590 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039612 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039631 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039652 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039672 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039691 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039710 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039730 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039771 5110 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039792 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039811 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039829 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039848 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on 
node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039866 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039890 5110 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039911 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039931 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039950 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.039968 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040133 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040162 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040206 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040219 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040229 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040241 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040251 5110 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" 
DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040261 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040272 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040284 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040295 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040307 5110 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040317 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040357 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040370 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040382 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040393 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040406 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040416 5110 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040427 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040438 5110 
reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040448 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040421 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040459 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040509 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040553 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040578 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040592 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040633 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040648 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040661 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040672 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040709 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040724 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040737 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040752 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040790 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040805 5110 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040817 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040829 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040842 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040879 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040893 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040905 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040917 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040955 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040969 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040982 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.040996 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041031 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041045 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041067 5110 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041079 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041117 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041131 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041144 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041157 5110 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041193 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041208 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" 
(UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041222 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041234 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041247 5110 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041286 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041302 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041314 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041356 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041373 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041385 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041397 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041413 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041457 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041470 5110 reconciler_common.go:299] "Volume 
detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041490 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041510 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041533 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041551 5110 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041563 5110 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041576 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041588 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041601 5110 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041614 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041626 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041638 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041650 5110 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041664 5110 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041681 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041698 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041715 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041730 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041745 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041764 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041780 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041799 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041815 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041830 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041846 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041862 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041879 5110 reconciler_common.go:299] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041898 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041915 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041930 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041944 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041959 5110 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041974 5110 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.041989 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042004 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042019 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042036 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042051 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042066 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042082 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042097 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042112 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042130 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042148 5110 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042166 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042182 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042202 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042218 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042234 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042251 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042268 5110 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042283 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042299 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: 
\"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042316 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042407 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042428 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042444 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042463 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042480 5110 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.042497 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.043302 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.045966 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.045992 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046006 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046118 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 
00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046155 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046177 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046276 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:39.546139382 +0000 UTC m=+81.504375511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.046504 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:39.546493032 +0000 UTC m=+81.504729161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.048628 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.049201 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.053677 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.057453 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.060569 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.063073 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.064381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.065913 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.067263 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.076842 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.077566 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.079731 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.090348 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.091837 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc 
kubenswrapper[5110]: I0130 00:13:39.096714 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.105871 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.117730 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.117947 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.118010 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.117922 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.118133 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.118380 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.127212 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.130293 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.140681 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144144 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-binary-copy\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144265 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144300 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144343 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144375 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144399 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-os-release\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144422 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-conf-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144446 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144472 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144496 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144530 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144557 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144583 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144605 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-host\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144637 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjsx9\" (UniqueName: \"kubernetes.io/projected/560f3d2b-f6b8-42cd-9a6a-2c141c780302-kube-api-access-qjsx9\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144662 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144690 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9v2\" (UniqueName: \"kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144747 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144758 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-697d4\" (UniqueName: \"kubernetes.io/projected/1fbd252e-c54f-4a19-b637-adb4d23722fc-kube-api-access-697d4\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144832 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144865 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-k8s-cni-cncf-io\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144908 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144910 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144954 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.144927 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145024 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-cni-binary-copy\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145065 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-socket-dir-parent\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145107 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-cnibin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145154 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-os-release\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145176 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145174 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145272 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-os-release\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145399 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-os-release\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145408 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-conf-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145489 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-hostroot\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145592 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2zm\" (UniqueName: \"kubernetes.io/projected/0bf0b3ab-206c-49bb-a5bd-f177b968c344-kube-api-access-5m2zm\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145709 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145803 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145807 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-host\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145889 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-socket-dir-parent\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc 
kubenswrapper[5110]: I0130 00:13:39.145781 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-hostroot\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145914 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-binary-copy\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145889 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145758 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.145946 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-k8s-cni-cncf-io\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146233 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146147 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-cnibin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146280 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146341 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146200 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146371 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-multus-certs\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146476 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146547 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146611 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146723 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-multus-certs\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146652 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146016 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146680 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146844 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units\") pod \"ovnkube-node-xdrfx\" (UID: 
\"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146886 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146888 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.146912 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-serviceca\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147065 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-cni-binary-copy\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147133 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147159 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147193 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-bin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147215 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147235 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147254 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147301 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/97dc714a-5d84-4c81-99ef-13067437fcad-rootfs\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147320 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-kubelet\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147354 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7sdgv\" (UniqueName: \"kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147373 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6lk\" (UniqueName: \"kubernetes.io/projected/97dc714a-5d84-4c81-99ef-13067437fcad-kube-api-access-gt6lk\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147391 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-daemon-config\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147408 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0bf0b3ab-206c-49bb-a5bd-f177b968c344-hosts-file\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147425 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147530 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97dc714a-5d84-4c81-99ef-13067437fcad-proxy-tls\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " 
pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147552 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97dc714a-5d84-4c81-99ef-13067437fcad-mcd-auth-proxy-config\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147574 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147609 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-bin\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147584 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-etc-kubernetes\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147660 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0bf0b3ab-206c-49bb-a5bd-f177b968c344-tmp-dir\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147722 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-system-cni-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147746 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147765 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-netns\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147784 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-multus\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc 
kubenswrapper[5110]: I0130 00:13:39.147804 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kcp\" (UniqueName: \"kubernetes.io/projected/f47cb22d-f09e-43a7-95e0-0e1008827f08-kube-api-access-d8kcp\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147822 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pz286\" (UniqueName: \"kubernetes.io/projected/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-kube-api-access-pz286\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147860 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147905 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/97dc714a-5d84-4c81-99ef-13067437fcad-rootfs\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.147925 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-kubelet\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148010 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148177 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-system-cni-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.148274 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.148360 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. 
No retries permitted until 2026-01-30 00:13:39.648310383 +0000 UTC m=+81.606546512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.148422 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-20524913a8765d443c0efec8bed5e4499a692c52802432c81f2bc097eabbdbf9 WatchSource:0}: Error finding container 20524913a8765d443c0efec8bed5e4499a692c52802432c81f2bc097eabbdbf9: Status 404 returned error can't find the container with id 20524913a8765d443c0efec8bed5e4499a692c52802432c81f2bc097eabbdbf9 Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148478 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-run-netns\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148651 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148773 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-host-var-lib-cni-multus\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148805 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cnibin\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148830 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-system-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.148906 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-system-cni-dir\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.149079 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f47cb22d-f09e-43a7-95e0-0e1008827f08-etc-kubernetes\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 
30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.149598 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.150398 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.150491 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cnibin\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.150603 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f47cb22d-f09e-43a7-95e0-0e1008827f08-multus-daemon-config\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151112 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151155 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151181 5110 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151202 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151238 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151260 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151283 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151305 5110 
reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151392 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.151310 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.152758 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-serviceca\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz"
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.152987 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
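The two "Unhandled Error" dumps that follow (the webhook container here, the approver container further down) are the kubelet printing the full Container spec of network-node-identity-dgvkt after both containers failed with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars". The kubelet builds the legacy Kubernetes service environment variables for each container from its Service informer cache, and immediately after this restart that cache has not synced yet, so container creation is refused. Because the ovnkube-identity webhook therefore never binds 127.0.0.1:9743, every "Failed to update status for pod" entry in this log fails with connection refused against pod.network-node-identity.openshift.io; this typically clears on its own once the informers sync. A minimal verification sketch, assuming shell access to the crc node and curl available there (the port and /pod endpoint are taken from the webhook command line dumped below):

    # Probe the ovnkube-identity webhook port from the node.
    # "connection refused" matches the failed status patches in this log.
    curl -sk --max-time 2 https://127.0.0.1:9743/pod || echo "webhook not listening yet"
    # Once the kubelet's informers sync and the webhook container starts,
    # the same probe should return an HTTP response instead of a dial error.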
Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.153039 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then
Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport
Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master"
Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport
Jan 30 00:13:39 crc kubenswrapper[5110]: fi
Jan 30 00:13:39 crc kubenswrapper[5110]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Jan 30 00:13:39 crc kubenswrapper[5110]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Jan 30 00:13:39 crc kubenswrapper[5110]: ho_enable="--enable-hybrid-overlay"
Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Jan 30 00:13:39 crc kubenswrapper[5110]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Jan 30 00:13:39 crc kubenswrapper[5110]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-cert-dir="/etc/webhook-cert" \
Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-host=127.0.0.1 \
Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-port=9743 \
Jan 30 00:13:39 crc kubenswrapper[5110]: ${ho_enable} \
Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-interconnect \
Jan 30 00:13:39 crc kubenswrapper[5110]: --disable-approver \
Jan 30 00:13:39 crc kubenswrapper[5110]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Jan 30 00:13:39 crc kubenswrapper[5110]: --wait-for-kubernetes-api=200s \
Jan 30 00:13:39 crc kubenswrapper[5110]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}"
Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct
envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.153130 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.153212 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0bf0b3ab-206c-49bb-a5bd-f177b968c344-hosts-file\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.154076 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.155068 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.155131 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.155244 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0bf0b3ab-206c-49bb-a5bd-f177b968c344-tmp-dir\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77"
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.156403 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97dc714a-5d84-4c81-99ef-13067437fcad-mcd-auth-proxy-config\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6"
Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.157233 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then
Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport
Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master"
Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport
Jan 30 00:13:39 crc kubenswrapper[5110]: fi
Jan 30 00:13:39 crc kubenswrapper[5110]:
Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 30 00:13:39 crc kubenswrapper[5110]: --disable-webhook \
Jan 30 00:13:39 crc kubenswrapper[5110]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}"
Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.159068 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.161801 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97dc714a-5d84-4c81-99ef-13067437fcad-proxy-tls\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.164442 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/560f3d2b-f6b8-42cd-9a6a-2c141c780302-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") " pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.166422 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2zm\" (UniqueName: \"kubernetes.io/projected/0bf0b3ab-206c-49bb-a5bd-f177b968c344-kube-api-access-5m2zm\") pod \"node-resolver-pll77\" (UID: \"0bf0b3ab-206c-49bb-a5bd-f177b968c344\") " pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.168498 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjsx9\" (UniqueName: \"kubernetes.io/projected/560f3d2b-f6b8-42cd-9a6a-2c141c780302-kube-api-access-qjsx9\") pod \"multus-additional-cni-plugins-jf6rt\" (UID: \"560f3d2b-f6b8-42cd-9a6a-2c141c780302\") 
" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.173029 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.174307 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.175601 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sdgv\" (UniqueName: \"kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv\") pod \"ovnkube-node-xdrfx\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.177982 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9v2\" (UniqueName: \"kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2\") pod \"ovnkube-control-plane-57b78d8988-xfqbx\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.183707 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6lk\" (UniqueName: \"kubernetes.io/projected/97dc714a-5d84-4c81-99ef-13067437fcad-kube-api-access-gt6lk\") pod \"machine-config-daemon-t6dv6\" (UID: \"97dc714a-5d84-4c81-99ef-13067437fcad\") " pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.183761 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz286\" (UniqueName: \"kubernetes.io/projected/7f9b7ad1-23e2-4a81-a158-29a14e73eed5-kube-api-access-pz286\") pod \"node-ca-kz9zz\" (UID: \"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\") " pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.186062 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.187324 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kz9zz" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.188895 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-697d4\" (UniqueName: \"kubernetes.io/projected/1fbd252e-c54f-4a19-b637-adb4d23722fc-kube-api-access-697d4\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.191110 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kcp\" (UniqueName: \"kubernetes.io/projected/f47cb22d-f09e-43a7-95e0-0e1008827f08-kube-api-access-d8kcp\") pod \"multus-v6j88\" (UID: \"f47cb22d-f09e-43a7-95e0-0e1008827f08\") " pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.197906 5110 util.go:30] "No sandbox for pod can be found. 
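
Note: "No sandbox for pod can be found. Need to start a new one" is the kubelet's normal post-restart path, not an error by itself: the old pod sandboxes did not survive the restart, so each pod gets a fresh sandbox, and for pods that are not host-network that in turn requires a working CNI. A quick way to watch the sandboxes come back on the node, sketched with standard crictl flags (the name filter and the sandbox id below are illustrative placeholders):

    crictl pods --name node-ca          # list pod sandboxes matching a name
    crictl pods --state ready           # sandboxes that have reached Ready
    crictl inspectp <pod-sandbox-id>    # details for one sandbox (id is a placeholder)
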
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.198767 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.199275 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9b7ad1_23e2_4a81_a158_29a14e73eed5.slice/crio-cf304e1abb88e5d5e5291d9a4e0c137ffa5e404643457ec83b18f8f504d80461 WatchSource:0}: Error finding container cf304e1abb88e5d5e5291d9a4e0c137ffa5e404643457ec83b18f8f504d80461: Status 404 returned error can't find the container with id cf304e1abb88e5d5e5291d9a4e0c137ffa5e404643457ec83b18f8f504d80461 Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.202593 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:13:39 crc kubenswrapper[5110]: while [ true ]; Jan 30 00:13:39 crc 
kubenswrapper[5110]: do Jan 30 00:13:39 crc kubenswrapper[5110]: for f in $(ls /tmp/serviceca); do Jan 30 00:13:39 crc kubenswrapper[5110]: echo $f Jan 30 00:13:39 crc kubenswrapper[5110]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:13:39 crc kubenswrapper[5110]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:13:39 crc kubenswrapper[5110]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:13:39 crc kubenswrapper[5110]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: else Jan 30 00:13:39 crc kubenswrapper[5110]: mkdir $reg_dir_path Jan 30 00:13:39 crc kubenswrapper[5110]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:13:39 crc kubenswrapper[5110]: echo $d Jan 30 00:13:39 crc kubenswrapper[5110]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:13:39 crc kubenswrapper[5110]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:13:39 crc kubenswrapper[5110]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: rm -rf /etc/docker/certs.d/$d Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait ${!} Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pz286,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-kz9zz_openshift-image-registry(7f9b7ad1-23e2-4a81-a158-29a14e73eed5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.203803 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-kz9zz" podUID="7f9b7ad1-23e2-4a81-a158-29a14e73eed5" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.209527 5110 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.210967 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:13:39 crc kubenswrapper[5110]: apiVersion: v1 Jan 30 00:13:39 crc kubenswrapper[5110]: clusters: Jan 30 00:13:39 crc kubenswrapper[5110]: - cluster: Jan 30 00:13:39 crc kubenswrapper[5110]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: server: https://api-int.crc.testing:6443 Jan 30 00:13:39 crc kubenswrapper[5110]: name: default-cluster Jan 30 00:13:39 crc kubenswrapper[5110]: contexts: Jan 30 00:13:39 crc kubenswrapper[5110]: - context: Jan 30 00:13:39 crc kubenswrapper[5110]: cluster: default-cluster Jan 30 00:13:39 crc kubenswrapper[5110]: namespace: default Jan 30 00:13:39 crc kubenswrapper[5110]: user: default-auth Jan 30 00:13:39 crc kubenswrapper[5110]: name: default-context Jan 30 00:13:39 crc kubenswrapper[5110]: current-context: default-context Jan 30 00:13:39 crc kubenswrapper[5110]: kind: Config Jan 30 00:13:39 crc kubenswrapper[5110]: preferences: {} Jan 30 00:13:39 crc kubenswrapper[5110]: users: Jan 30 00:13:39 crc kubenswrapper[5110]: - name: default-auth Jan 30 00:13:39 crc kubenswrapper[5110]: user: Jan 30 00:13:39 crc kubenswrapper[5110]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:13:39 crc kubenswrapper[5110]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:13:39 crc kubenswrapper[5110]: EOF Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sdgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-xdrfx_openshift-ovn-kubernetes(89a63cd7-c2e9-4666-a363-aa6f67187756): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.212126 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.212406 5110 status_manager.go:919] "Failed to update status for 
pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220153 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220199 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220215 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220252 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220295 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.220982 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.223666 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97dc714a_5d84_4c81_99ef_13067437fcad.slice/crio-413db38027dea02a7a6931ab6dc8240c2e4a773316f9157d2aa0bf0a3f5b43ae WatchSource:0}: Error finding container 413db38027dea02a7a6931ab6dc8240c2e4a773316f9157d2aa0bf0a3f5b43ae: Status 404 returned error can't find the container with id 413db38027dea02a7a6931ab6dc8240c2e4a773316f9157d2aa0bf0a3f5b43ae Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.225147 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.227086 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-t6dv6_openshift-machine-config-operator(97dc714a-5d84-4c81-99ef-13067437fcad): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.229090 5110 util.go:30] "No sandbox for pod can be found. 
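
Note: the repeated CreateContainerConfigError "services have not yet been read at least once, cannot construct envvars" is the kubelet refusing to build a container's environment before its Service informer cache has synced at least once: the Docker-style service-link variables are derived from that cache, so machine-config-daemon, node-ca, kubecfg-setup and the rest all fail with the identical message until the kubelet can list Services from the API server. It is normally transient after a restart. A minimal check from the node, assuming only journalctl and grep:

    # How often is the error still firing in the last few minutes?
    journalctl -u kubelet --since -10min | grep -c 'services have not yet been read at least once'
    # A falling count (then zero) means the Service cache has synced and
    # container creation can proceed.
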
Need to start a new one" pod="openshift-dns/node-resolver-pll77" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.229290 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-t6dv6_openshift-machine-config-operator(97dc714a-5d84-4c81-99ef-13067437fcad): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.230860 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.237911 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.238835 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjsx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jf6rt_openshift-multus(560f3d2b-f6b8-42cd-9a6a-2c141c780302): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.238929 5110 util.go:30] "No sandbox for pod can be found. 
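
Note: taken together, the records above describe a startup ordering problem rather than independent failures. The node is NotReady because NetworkReady=false (no CNI config in /etc/kubernetes/cni/net.d/); the multus/ovnkube pods that would write that config are host-network, so they do not need CNI themselves, but they are blocked by the same Service-cache wait; and the node-local webhook rejects status patches in the meantime. All three should clear in sequence once the kubelet reaches the API server. A recovery check, using the path from the NodeNotReady record and a standard jsonpath query (node name crc is taken from this log):

    ls -l /etc/kubernetes/cni/net.d/     # a CNI config file should appear here
    oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
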
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.240044 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" podUID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.248223 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf0b3ab_206c_49bb_a5bd_f177b968c344.slice/crio-cda2219eec0bc9c3173df9ec44b3ffdb86b68243ac042009fedd45d01f3507ee WatchSource:0}: Error finding container cda2219eec0bc9c3173df9ec44b3ffdb86b68243ac042009fedd45d01f3507ee: Status 404 returned error can't find the container with id cda2219eec0bc9c3173df9ec44b3ffdb86b68243ac042009fedd45d01f3507ee Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.248756 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c235
5245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.251239 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:39 crc kubenswrapper[5110]: set -uo pipefail Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:13:39 crc kubenswrapper[5110]: HOSTS_FILE="/etc/hosts" Jan 30 00:13:39 crc kubenswrapper[5110]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:13:39 crc kubenswrapper[5110]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo "Failed to preserve hosts file. Exiting." Jan 30 00:13:39 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: while true; do Jan 30 00:13:39 crc kubenswrapper[5110]: declare -A svc_ips Jan 30 00:13:39 crc kubenswrapper[5110]: for svc in "${services[@]}"; do Jan 30 00:13:39 crc kubenswrapper[5110]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:13:39 crc kubenswrapper[5110]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:13:39 crc kubenswrapper[5110]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:13:39 crc kubenswrapper[5110]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:13:39 crc kubenswrapper[5110]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:13:39 crc kubenswrapper[5110]: for i in ${!cmds[*]} Jan 30 00:13:39 crc kubenswrapper[5110]: do Jan 30 00:13:39 crc kubenswrapper[5110]: ips=($(eval "${cmds[i]}")) Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:13:39 crc kubenswrapper[5110]: break Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:13:39 crc kubenswrapper[5110]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:13:39 crc kubenswrapper[5110]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:13:39 crc kubenswrapper[5110]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:13:39 crc kubenswrapper[5110]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: continue Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Append resolver entries for services Jan 30 00:13:39 crc kubenswrapper[5110]: rc=0 Jan 30 00:13:39 crc kubenswrapper[5110]: for svc in "${!svc_ips[@]}"; do Jan 30 00:13:39 crc kubenswrapper[5110]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:13:39 crc kubenswrapper[5110]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ $rc -ne 0 ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: continue Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:13:39 crc kubenswrapper[5110]: # Replace /etc/hosts with our modified version if needed Jan 30 00:13:39 crc kubenswrapper[5110]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:13:39 crc kubenswrapper[5110]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: unset svc_ips Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m2zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pll77_openshift-dns(0bf0b3ab-206c-49bb-a5bd-f177b968c344): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.253001 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pll77" podUID="0bf0b3ab-206c-49bb-a5bd-f177b968c344" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.260104 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.267869 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:39 crc kubenswrapper[5110]: set -euo pipefail Jan 30 00:13:39 crc kubenswrapper[5110]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:13:39 crc kubenswrapper[5110]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:13:39 crc kubenswrapper[5110]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:13:39 crc kubenswrapper[5110]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:13:39 crc kubenswrapper[5110]: TS=$(date +%s) Jan 30 00:13:39 crc kubenswrapper[5110]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:13:39 crc kubenswrapper[5110]: HAS_LOGGED_INFO=0 Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: log_missing_certs(){ Jan 30 00:13:39 crc kubenswrapper[5110]: CUR_TS=$(date +%s) Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:13:39 crc kubenswrapper[5110]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:13:39 crc kubenswrapper[5110]: HAS_LOGGED_INFO=1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: } Jan 30 00:13:39 crc kubenswrapper[5110]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 30 00:13:39 crc kubenswrapper[5110]: log_missing_certs Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 5 Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:13:39 crc kubenswrapper[5110]: --logtostderr \ Jan 30 00:13:39 crc kubenswrapper[5110]: --secure-listen-address=:9108 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-cert-file=${TLS_CERT} Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc9v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xfqbx_openshift-ovn-kubernetes(a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.270105 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master" Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 
00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # This is needed so that converting clusters from GA to TP Jan 30 00:13:39 crc kubenswrapper[5110]: # will rollout control plane pods as well Jan 30 00:13:39 crc kubenswrapper[5110]: network_segmentation_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" != "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: route_advertisements_enable_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_policy_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:13:39 crc kubenswrapper[5110]: admin_network_policy_enabled_flag= Jan 30 00:13:39 crc 
kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: if [ "shared" == "shared" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:13:39 crc kubenswrapper[5110]: elif [ "shared" == "local" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode local" Jan 30 00:13:39 crc kubenswrapper[5110]: else Jan 30 00:13:39 crc kubenswrapper[5110]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 30 00:13:39 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-interconnect \ Jan 30 00:13:39 crc kubenswrapper[5110]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-enable-pprof \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-enable-config-duration \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${persistent_ips_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${multi_network_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${network_segmentation_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${gateway_mode_flags} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${route_advertisements_enable_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-ip=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-firewall=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-qos=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-service=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-multicast \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-multi-external-gateway=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${multi_network_policy_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${admin_network_policy_enabled_flag} Jan 30 00:13:39 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc9v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xfqbx_openshift-ovn-kubernetes(a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.271277 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.273441 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.284483 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.298850 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 
00:13:39.323000 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.323075 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.323097 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.323125 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.323143 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.324705 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\"
:\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.325550 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"413db38027dea02a7a6931ab6dc8240c2e4a773316f9157d2aa0bf0a3f5b43ae"} Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.327881 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-t6dv6_openshift-machine-config-operator(97dc714a-5d84-4c81-99ef-13067437fcad): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.327963 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kz9zz" event={"ID":"7f9b7ad1-23e2-4a81-a158-29a14e73eed5","Type":"ContainerStarted","Data":"cf304e1abb88e5d5e5291d9a4e0c137ffa5e404643457ec83b18f8f504d80461"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.329197 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerStarted","Data":"b5f6113d220beb4c2ed642925c6763d74296c34df30a5e7a11dbaee8ec6367a1"} Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.336645 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:39 crc kubenswrapper[5110]: set -euo pipefail Jan 30 00:13:39 crc kubenswrapper[5110]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:13:39 crc kubenswrapper[5110]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:13:39 crc kubenswrapper[5110]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:13:39 crc kubenswrapper[5110]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:13:39 crc kubenswrapper[5110]: TS=$(date +%s) Jan 30 00:13:39 crc kubenswrapper[5110]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:13:39 crc kubenswrapper[5110]: HAS_LOGGED_INFO=0 Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: log_missing_certs(){ Jan 30 00:13:39 crc kubenswrapper[5110]: CUR_TS=$(date +%s) Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:13:39 crc kubenswrapper[5110]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:13:39 crc kubenswrapper[5110]: HAS_LOGGED_INFO=1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: } Jan 30 00:13:39 crc kubenswrapper[5110]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 30 00:13:39 crc kubenswrapper[5110]: log_missing_certs Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 5 Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:13:39 crc kubenswrapper[5110]: --logtostderr \ Jan 30 00:13:39 crc kubenswrapper[5110]: --secure-listen-address=:9108 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:13:39 crc kubenswrapper[5110]: --tls-cert-file=${TLS_CERT} Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc9v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xfqbx_openshift-ovn-kubernetes(a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.336767 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.337027 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:13:39 crc kubenswrapper[5110]: while [ true ]; Jan 30 00:13:39 crc kubenswrapper[5110]: do Jan 30 00:13:39 crc kubenswrapper[5110]: for f in $(ls /tmp/serviceca); do Jan 30 00:13:39 crc kubenswrapper[5110]: echo $f Jan 30 00:13:39 crc kubenswrapper[5110]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:13:39 crc kubenswrapper[5110]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:13:39 crc kubenswrapper[5110]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:13:39 crc kubenswrapper[5110]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: else Jan 30 00:13:39 crc kubenswrapper[5110]: mkdir $reg_dir_path Jan 30 00:13:39 crc kubenswrapper[5110]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:13:39 crc kubenswrapper[5110]: echo $d Jan 30 00:13:39 crc kubenswrapper[5110]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:13:39 crc kubenswrapper[5110]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:13:39 crc 
kubenswrapper[5110]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: rm -rf /etc/docker/certs.d/$d Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait ${!} Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pz286,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-kz9zz_openshift-image-registry(7f9b7ad1-23e2-4a81-a158-29a14e73eed5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.337397 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-t6dv6_openshift-machine-config-operator(97dc714a-5d84-4c81-99ef-13067437fcad): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.338000 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pll77" event={"ID":"0bf0b3ab-206c-49bb-a5bd-f177b968c344","Type":"ContainerStarted","Data":"cda2219eec0bc9c3173df9ec44b3ffdb86b68243ac042009fedd45d01f3507ee"} Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.338187 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-kz9zz" podUID="7f9b7ad1-23e2-4a81-a158-29a14e73eed5" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.338539 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.339323 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master" Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: 
ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # This is needed so that converting clusters from GA to TP Jan 30 00:13:39 crc kubenswrapper[5110]: # will rollout control plane pods as well Jan 30 00:13:39 crc kubenswrapper[5110]: network_segmentation_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" != "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: route_advertisements_enable_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_policy_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 
00:13:39 crc kubenswrapper[5110]: admin_network_policy_enabled_flag= Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: if [ "shared" == "shared" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:13:39 crc kubenswrapper[5110]: elif [ "shared" == "local" ]; then Jan 30 00:13:39 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode local" Jan 30 00:13:39 crc kubenswrapper[5110]: else Jan 30 00:13:39 crc kubenswrapper[5110]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 30 00:13:39 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-interconnect \ Jan 30 00:13:39 crc kubenswrapper[5110]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-enable-pprof \ Jan 30 00:13:39 crc kubenswrapper[5110]: --metrics-enable-config-duration \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${persistent_ips_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${multi_network_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${network_segmentation_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${gateway_mode_flags} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${route_advertisements_enable_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-ip=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-firewall=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-qos=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-egress-service=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-multicast \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-multi-external-gateway=true \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${multi_network_policy_enabled_flag} \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${admin_network_policy_enabled_flag} Jan 30 00:13:39 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc9v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xfqbx_openshift-ovn-kubernetes(a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.340315 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:39 crc kubenswrapper[5110]: set -uo pipefail Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:13:39 crc kubenswrapper[5110]: HOSTS_FILE="/etc/hosts" Jan 30 00:13:39 crc kubenswrapper[5110]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:13:39 crc kubenswrapper[5110]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:13:39 crc kubenswrapper[5110]: echo "Failed to preserve hosts file. Exiting." 
Jan 30 00:13:39 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: while true; do Jan 30 00:13:39 crc kubenswrapper[5110]: declare -A svc_ips Jan 30 00:13:39 crc kubenswrapper[5110]: for svc in "${services[@]}"; do Jan 30 00:13:39 crc kubenswrapper[5110]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:13:39 crc kubenswrapper[5110]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:13:39 crc kubenswrapper[5110]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:13:39 crc kubenswrapper[5110]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:13:39 crc kubenswrapper[5110]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:13:39 crc kubenswrapper[5110]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:13:39 crc kubenswrapper[5110]: for i in ${!cmds[*]} Jan 30 00:13:39 crc kubenswrapper[5110]: do Jan 30 00:13:39 crc kubenswrapper[5110]: ips=($(eval "${cmds[i]}")) Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:13:39 crc kubenswrapper[5110]: break Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:13:39 crc kubenswrapper[5110]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:13:39 crc kubenswrapper[5110]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:13:39 crc kubenswrapper[5110]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:13:39 crc kubenswrapper[5110]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: continue Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # Append resolver entries for services Jan 30 00:13:39 crc kubenswrapper[5110]: rc=0 Jan 30 00:13:39 crc kubenswrapper[5110]: for svc in "${!svc_ips[@]}"; do Jan 30 00:13:39 crc kubenswrapper[5110]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:13:39 crc kubenswrapper[5110]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ $rc -ne 0 ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: continue Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:13:39 crc kubenswrapper[5110]: # Replace /etc/hosts with our modified version if needed Jan 30 00:13:39 crc kubenswrapper[5110]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:13:39 crc kubenswrapper[5110]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: sleep 60 & wait Jan 30 00:13:39 crc kubenswrapper[5110]: unset svc_ips Jan 30 00:13:39 crc kubenswrapper[5110]: done Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m2zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pll77_openshift-dns(0bf0b3ab-206c-49bb-a5bd-f177b968c344): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.342664 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pll77" podUID="0bf0b3ab-206c-49bb-a5bd-f177b968c344" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.342795 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerStarted","Data":"3c5758decf39cddd8dacc96a265876236060698fb7f7c7e12bf3f061689768df"} 
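The two error signatures that dominate the records above are related. Every container start in this window fails with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars", which the kubelet reports while its service informer cache has not completed an initial sync after restart (service-link environment variables cannot be constructed until then). At the same time, every pod status patch is rejected by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 with "connection refused" — and the webhook's own pod, network-node-identity-dgvkt, is among the pods blocked by the same envvar error, so the condition clears only once the informer syncs and the webhook container actually starts. What follows is a minimal triage sketch in bash, not part of the recorded session: the port, namespace, and error string are taken from the log above, while the assumption of a root shell on the node with an oc client configured for this cluster is mine.

#!/bin/bash
# Triage sketch (hypothetical; not from the journal above).
# Assumes: root shell on the affected node, `oc` pointed at this cluster.
set -u

# 1. Is the network-node-identity webhook serving yet? The status-patch
#    failures above all end in "dial tcp 127.0.0.1:9743: connection refused".
ss -tlnp | grep ':9743' || echo "nothing listening on 127.0.0.1:9743 yet"

# 2. How many envvar-construction failures has this kubelet boot logged?
#    The count should stop growing once the service informer syncs.
journalctl -u kubelet -b | grep -c 'services have not yet been read at least once'

# 3. Check the webhook pod itself; the status patches should start
#    succeeding once it reports Running.
oc get pods -n openshift-network-node-identity

Once the service lister syncs, the kubelet retries these pods on its normal backoff, so the CreateContainerConfigError entries here are transient startup noise rather than terminal failures.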
Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.344885 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"081281f4c7a2623bf2b29f821f09a92bc6c1ce88e6948cc8e2ed4b30e6e60fc9"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.346096 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"20524913a8765d443c0efec8bed5e4499a692c52802432c81f2bc097eabbdbf9"} Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.348277 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:13:39 crc kubenswrapper[5110]: apiVersion: v1 Jan 30 00:13:39 crc kubenswrapper[5110]: clusters: Jan 30 00:13:39 crc kubenswrapper[5110]: - cluster: Jan 30 00:13:39 crc kubenswrapper[5110]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:13:39 crc kubenswrapper[5110]: server: https://api-int.crc.testing:6443 Jan 30 00:13:39 crc kubenswrapper[5110]: name: default-cluster Jan 30 00:13:39 crc kubenswrapper[5110]: contexts: Jan 30 00:13:39 crc kubenswrapper[5110]: - context: Jan 30 00:13:39 crc kubenswrapper[5110]: cluster: default-cluster Jan 30 00:13:39 crc kubenswrapper[5110]: namespace: default Jan 30 00:13:39 crc kubenswrapper[5110]: user: default-auth Jan 30 00:13:39 crc kubenswrapper[5110]: name: default-context Jan 30 00:13:39 crc kubenswrapper[5110]: current-context: default-context Jan 30 00:13:39 crc kubenswrapper[5110]: kind: Config Jan 30 00:13:39 crc kubenswrapper[5110]: preferences: {} Jan 30 00:13:39 crc kubenswrapper[5110]: users: Jan 30 00:13:39 crc kubenswrapper[5110]: - name: default-auth Jan 30 00:13:39 crc kubenswrapper[5110]: user: Jan 30 00:13:39 crc kubenswrapper[5110]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:13:39 crc kubenswrapper[5110]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:13:39 crc kubenswrapper[5110]: EOF Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sdgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-xdrfx_openshift-ovn-kubernetes(89a63cd7-c2e9-4666-a363-aa6f67187756): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 
crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.348827 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master" Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 30 00:13:39 crc kubenswrapper[5110]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:13:39 crc kubenswrapper[5110]: ho_enable="--enable-hybrid-overlay" Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:13:39 crc kubenswrapper[5110]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:13:39 crc kubenswrapper[5110]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-host=127.0.0.1 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --webhook-port=9743 \ Jan 30 00:13:39 crc kubenswrapper[5110]: ${ho_enable} \ Jan 30 00:13:39 crc kubenswrapper[5110]: --enable-interconnect \ Jan 30 00:13:39 crc kubenswrapper[5110]: --disable-approver \ Jan 30 00:13:39 crc kubenswrapper[5110]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --wait-for-kubernetes-api=200s \ Jan 30 00:13:39 crc kubenswrapper[5110]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.348887 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjsx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jf6rt_openshift-multus(560f3d2b-f6b8-42cd-9a6a-2c141c780302): CreateContainerConfigError: services have not yet been read at least once, 
cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.349440 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.349961 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.350057 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" podUID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.351364 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: source "/env/_master" Jan 30 00:13:39 crc kubenswrapper[5110]: set +o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: Jan 30 00:13:39 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:13:39 crc kubenswrapper[5110]: --disable-webhook \ Jan 30 00:13:39 crc kubenswrapper[5110]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:13:39 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.353470 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.353542 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.367293 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.378805 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.386816 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.396481 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.413962 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.414795 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.425459 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.425544 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.425570 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.425601 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.425623 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.430037 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.430954 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-ac68eb6dda9252f8a35e558e39dd88e9924d74a77a6dd6c43573938d8247ba2e WatchSource:0}: Error finding container ac68eb6dda9252f8a35e558e39dd88e9924d74a77a6dd6c43573938d8247ba2e: Status 404 returned error can't find the container with id ac68eb6dda9252f8a35e558e39dd88e9924d74a77a6dd6c43573938d8247ba2e Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.434726 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:39 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:39 crc kubenswrapper[5110]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:13:39 
crc kubenswrapper[5110]: source /etc/kubernetes/apiserver-url.env Jan 30 00:13:39 crc kubenswrapper[5110]: else Jan 30 00:13:39 crc kubenswrapper[5110]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:13:39 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:39 crc kubenswrapper[5110]: fi Jan 30 00:13:39 crc kubenswrapper[5110]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:13:39 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743a
a302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.436011 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.451178 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.465021 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.468447 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-0ee1e39785ce775a7392e46ceea43a441d3c219cc945d9b0bbcabf899b04beb4 WatchSource:0}: Error finding container 0ee1e39785ce775a7392e46ceea43a441d3c219cc945d9b0bbcabf899b04beb4: Status 404 returned error can't find the container with id 0ee1e39785ce775a7392e46ceea43a441d3c219cc945d9b0bbcabf899b04beb4 Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.471303 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-v6j88" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.472180 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.474530 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:13:39 crc kubenswrapper[5110]: W0130 00:13:39.491673 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf47cb22d_f09e_43a7_95e0_0e1008827f08.slice/crio-d22bb2883ad9426ca62adf4ec4b2c0d1b1fd2a504bf329ad02fcc3dea5e28f1d WatchSource:0}: Error finding container d22bb2883ad9426ca62adf4ec4b2c0d1b1fd2a504bf329ad02fcc3dea5e28f1d: Status 404 returned error can't find the container with id d22bb2883ad9426ca62adf4ec4b2c0d1b1fd2a504bf329ad02fcc3dea5e28f1d Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.495731 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:39 crc kubenswrapper[5110]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec 
--],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:13:39 crc kubenswrapper[5110]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:13:39 crc kubenswrapper[5110]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropa
gation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8kcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-v6j88_openshift-multus(f47cb22d-f09e-43a7-95e0-0e1008827f08): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:39 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.496907 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-v6j88" podUID="f47cb22d-f09e-43a7-95e0-0e1008827f08" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.504239 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.527911 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.528001 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.528044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.528064 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.528076 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.544359 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.559946 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.560115 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.560159 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.560204 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.560233 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560491 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560525 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560541 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560618 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:40.56059569 +0000 UTC m=+82.518831829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560731 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560779 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560810 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560826 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560890 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:40.560850577 +0000 UTC m=+82.519086746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.560952 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:40.560902148 +0000 UTC m=+82.519138277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.561024 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:13:40.561009471 +0000 UTC m=+82.519245640 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.562315 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.562403 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:40.562388797 +0000 UTC m=+82.520624926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.588391 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.626516 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.630576 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.630634 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.630656 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.630688 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.630711 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.661933 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.662091 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.662305 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. No retries permitted until 2026-01-30 00:13:40.662252167 +0000 UTC m=+82.620488306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.665589 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.705798 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.734221 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.734288 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.734302 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.734323 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.734358 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.746462 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.793666 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.828001 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.837711 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.837771 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.837789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.837817 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.837835 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.872042 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.872034 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d
7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: E0130 00:13:39.872258 5110 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.904483 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c2
80166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.940762 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.940822 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.940835 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.940853 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.940865 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:39Z","lastTransitionTime":"2026-01-30T00:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.946022 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:39 crc kubenswrapper[5110]: I0130 00:13:39.982446 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 
00:13:40.028894 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.043175 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.043244 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.043265 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.043291 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.043310 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.074963 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.101656 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.143519 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.145362 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.145432 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.145453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.145480 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.145508 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.184517 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.223966 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.249084 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.249161 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.249180 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.249210 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.249233 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.264306 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.302656 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.346634 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351006 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"0ee1e39785ce775a7392e46ceea43a441d3c219cc945d9b0bbcabf899b04beb4"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351616 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351703 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351725 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351755 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.351774 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.353148 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v6j88" event={"ID":"f47cb22d-f09e-43a7-95e0-0e1008827f08","Type":"ContainerStarted","Data":"d22bb2883ad9426ca62adf4ec4b2c0d1b1fd2a504bf329ad02fcc3dea5e28f1d"} Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.353565 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.354708 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.355058 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"ac68eb6dda9252f8a35e558e39dd88e9924d74a77a6dd6c43573938d8247ba2e"} Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.356206 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:40 crc kubenswrapper[5110]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:13:40 crc kubenswrapper[5110]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:13:40 crc kubenswrapper[5110]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:fal
se,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8kcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-v6j88_openshift-multus(f47cb22d-f09e-43a7-95e0-0e1008827f08): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:40 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.357427 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-v6j88" podUID="f47cb22d-f09e-43a7-95e0-0e1008827f08" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.358065 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:13:40 crc kubenswrapper[5110]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:13:40 crc kubenswrapper[5110]: set -o allexport Jan 30 00:13:40 crc kubenswrapper[5110]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:13:40 crc kubenswrapper[5110]: source /etc/kubernetes/apiserver-url.env Jan 30 00:13:40 crc kubenswrapper[5110]: else Jan 30 00:13:40 crc kubenswrapper[5110]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:13:40 crc kubenswrapper[5110]: exit 1 Jan 30 00:13:40 crc kubenswrapper[5110]: fi Jan 30 00:13:40 crc kubenswrapper[5110]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:13:40 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:13:40 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.359283 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.396087 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc 
kubenswrapper[5110]: I0130 00:13:40.439555 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\
":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.454809 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.454899 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.454921 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.454949 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.454971 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.463562 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.501986 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.541005 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.557927 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.557993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.558015 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.558041 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.558064 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.584923 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.597402 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.597809 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:13:42.597774772 +0000 UTC m=+84.556010901 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.597983 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.598107 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.598232 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.598346 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598461 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598503 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598530 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598590 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598626 5110 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598638 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:42.598612704 +0000 UTC m=+84.556848863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598680 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.599053 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.598907 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:42.598865821 +0000 UTC m=+84.557101990 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.599105 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.599145 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:42.599117277 +0000 UTC m=+84.557353446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.599263 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:13:42.59921562 +0000 UTC m=+84.557451789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.620602 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663635 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663706 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663721 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663741 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663758 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.663839 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.699876 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.700083 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.700163 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. No retries permitted until 2026-01-30 00:13:42.700140168 +0000 UTC m=+84.658376297 (durationBeforeRetry 2s). 
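The "No retries permitted until ... (durationBeforeRetry 2s)" lines come from the volume manager's pending-operations bookkeeping: a mount or unmount that fails is not retried until its backoff window expires. A self-contained sketch of that gate follows; the type and field names here are invented for illustration, not the kubelet's.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pendingOp tracks when an operation last failed and how long to wait
    // before permitting another attempt.
    type pendingOp struct {
    	lastErrorAt time.Time
    	backoff     time.Duration
    }

    func (o *pendingOp) run(attempt func() error) error {
    	if until := o.lastErrorAt.Add(o.backoff); time.Now().Before(until) {
    		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
    			until.Format(time.RFC3339), o.backoff)
    	}
    	if err := attempt(); err != nil {
    		o.lastErrorAt = time.Now()
    		return err
    	}
    	return nil
    }

    func main() {
    	op := &pendingOp{backoff: 2 * time.Second}
    	mount := func() error {
    		// Stand-in for the failures above, e.g. a secret not yet registered.
    		return errors.New(`object "openshift-multus"/"metrics-daemon-secret" not registered`)
    	}
    	for i := 0; i < 3; i++ {
    		fmt.Println(op.run(mount)) // first call fails, next calls hit the gate
    		time.Sleep(time.Second)
    	}
    }
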
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.706759 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.745745 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.766744 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.766842 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.766861 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.766880 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.766895 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.786153 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.826259 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.867018 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.869241 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.869327 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.869398 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.869432 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.869463 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.871768 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.871936 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.871980 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.872190 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.871780 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:40 crc kubenswrapper[5110]: E0130 00:13:40.872390 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.880880 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.882592 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.886192 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.888707 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.894303 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.901190 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.903001 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.905299 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.906148 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.908984 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.911232 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.912754 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.916525 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.919138 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.922803 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.923822 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.925585 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.928043 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.930643 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.934024 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.936279 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.939412 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.943546 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.946122 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.947908 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.948857 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.950006 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 30 00:13:40 
crc kubenswrapper[5110]: I0130 00:13:40.952146 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.954470 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.956787 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.961150 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.962298 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.965891 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.967731 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.972479 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.972870 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.973061 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.973225 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.973399 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.973549 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:40Z","lastTransitionTime":"2026-01-30T00:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.975590 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.988790 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.990200 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.992107 5110 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 00:13:40 crc kubenswrapper[5110]: I0130 00:13:40.992687 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.000038 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.003428 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.005838 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.008789 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.010081 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.013481 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.015062 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.017169 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.019147 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" 
path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.022985 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.024376 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.025728 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.028167 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.030759 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.032526 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.035440 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.038227 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.041514 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.044430 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.046461 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.049377 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.050685 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.072491 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.076764 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.076810 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.076830 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.076850 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.076865 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.103969 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.141577 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.179530 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.179941 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.180100 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.180254 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.180455 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.190250 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.225871 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.265196 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.283357 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.283385 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.283394 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.283410 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.283421 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.303661 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.349247 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-ku
be-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set 
denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8
e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.384109 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.385185 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.385219 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.385229 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.385243 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.385255 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.437360 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.485606 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir
\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290
b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.488215 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.488251 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.488262 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.488296 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.488311 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.503787 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.544414 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.586788 5110 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.591047 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.591123 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.591143 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.591171 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.591191 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.628030 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.664742 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.694156 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.694248 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.694269 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.694301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.694323 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.705787 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.749139 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.787831 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.797482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.797661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.797684 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.797712 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.797732 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.827605 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.867470 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.871832 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:41 crc kubenswrapper[5110]: E0130 00:13:41.872026 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.900585 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.900667 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.900689 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.900716 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:41 crc kubenswrapper[5110]: I0130 00:13:41.900736 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:41Z","lastTransitionTime":"2026-01-30T00:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.004011 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.004066 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.004086 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.004112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.004134 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.107284 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.107393 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.107411 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.107435 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.107453 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.210701 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.210761 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.210777 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.210798 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.210814 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.313779 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.313854 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.313873 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.313930 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.313950 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.417406 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.417466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.417477 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.417494 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.417506 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.520211 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.520256 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.520283 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.520301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.520310 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.622011 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.622131 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.622189 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.622242 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622475 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.622435702 +0000 UTC m=+88.580671841 (durationBeforeRetry 4s). 
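[Annotation: the "No retries permitted until ... (durationBeforeRetry 4s)" wording above is the kubelet's volume manager backing off a failed mount/unmount operation rather than retrying immediately. The 4s figure is consistent with an exponential schedule (roughly 500ms doubling per consecutive failure, with a cap); treat the exact constants in this sketch as assumptions, not values read from this cluster.]

```python
# Sketch of the retry schedule implied by "durationBeforeRetry 4s" above.
# BASE_S / FACTOR / CAP_S are illustrative assumptions about the kubelet's
# exponential backoff for volume operations, not configured values.

BASE_S = 0.5   # assumed initial delay after the first failure
FACTOR = 2.0   # assumed multiplier per consecutive failure
CAP_S = 122.0  # assumed ceiling on the delay

def duration_before_retry(failures: int) -> float:
    """Delay applied after `failures` consecutive failed attempts."""
    return min(BASE_S * FACTOR ** (failures - 1), CAP_S)

if __name__ == "__main__":
    for n in range(1, 6):
        print(n, duration_before_retry(n))  # 0.5, 1.0, 2.0, 4.0, 8.0
```

Under these assumed constants, a 4s delay corresponds to the fourth consecutive failure of the same operation, which matches a kubelet that has been retrying since shortly after restart.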
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622598 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622662 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622692 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622754 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622772 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622805 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622817 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622826 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.622810842 +0000 UTC m=+88.581046981 (durationBeforeRetry 4s). 
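[Annotation: the repeated `object "<namespace>"/"<name>" not registered` errors above come from the kubelet's local object cache: after a restart, projected volumes, ConfigMaps, and Secrets fail to resolve until the per-type watch caches have synced (the later "Caches populated" line for *v1.CSIDriver is the same mechanism completing for another type). A small triage helper for a saved journal, below; the regex targets the exact message shape seen here, and the input file name is a placeholder.]

```python
# Tally the distinct objects behind "not registered" volume failures in a
# saved kubelet journal, to distinguish a cold cache (many objects, counts
# falling off after startup) from one object that is genuinely missing.

import re
from collections import Counter

NOT_REGISTERED = re.compile(r'object "([^"]+)"/"([^"]+)" not registered')

def tally(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.replace('\\"', '"')  # some entries embed escaped JSON
            for ns, name in NOT_REGISTERED.findall(line):
                counts[(ns, name)] += 1
    return counts

if __name__ == "__main__":
    for (ns, name), n in tally("kubelet.log").most_common():  # placeholder path
        print(f"{n:5d}  {ns}/{name}")
```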
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622841 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.622600 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622936 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.622899104 +0000 UTC m=+88.581135233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.622963 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.622954105 +0000 UTC m=+88.581190234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.623396 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.623361776 +0000 UTC m=+88.581598155 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.623945 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.623995 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.624008 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.624031 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.624047 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.724183 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.725048 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.725562 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. No retries permitted until 2026-01-30 00:13:46.725321861 +0000 UTC m=+88.683558020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.726851 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.726916 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.726938 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.726968 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.726988 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.830153 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.830296 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.830411 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.830452 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.830486 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.872814 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.872813 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.873052 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.873327 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.873474 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:42 crc kubenswrapper[5110]: E0130 00:13:42.873699 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.933547 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.933609 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.933628 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.933655 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:42 crc kubenswrapper[5110]: I0130 00:13:42.933678 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:42Z","lastTransitionTime":"2026-01-30T00:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.036799 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.036923 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.036954 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.037002 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.037032 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.139967 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.140052 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.140071 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.140102 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.140127 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.243386 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.243462 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.243483 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.243510 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.243530 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.345867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.345941 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.345959 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.345986 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.346007 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.449948 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.450025 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.450045 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.450072 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.450091 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.553845 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.553943 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.553972 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.554008 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.554036 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.657698 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.657764 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.657783 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.657809 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.657832 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.760390 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.760710 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.760913 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.761138 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.761390 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.851463 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.864159 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.864252 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.864273 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.864304 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.864324 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.871491 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:43 crc kubenswrapper[5110]: E0130 00:13:43.871842 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.968050 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.968140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.968161 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.968192 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:43 crc kubenswrapper[5110]: I0130 00:13:43.968280 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:43Z","lastTransitionTime":"2026-01-30T00:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.071723 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.071793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.071819 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.071844 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.071862 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.174623 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.174686 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.174698 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.174718 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.174732 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.277667 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.277735 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.277756 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.277787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.277808 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.380776 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.380847 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.380867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.380892 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.380911 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.484314 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.484459 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.484487 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.484524 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.484555 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.587796 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.587867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.587888 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.587915 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.587937 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.691285 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.691369 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.691389 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.691412 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.691434 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.795071 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.795189 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.795247 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.795281 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.795322 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.871906 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.871988 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.871905 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:44 crc kubenswrapper[5110]: E0130 00:13:44.872126 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:44 crc kubenswrapper[5110]: E0130 00:13:44.872245 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:44 crc kubenswrapper[5110]: E0130 00:13:44.872403 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.897650 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.897784 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.897812 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.897847 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:44 crc kubenswrapper[5110]: I0130 00:13:44.897875 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:44Z","lastTransitionTime":"2026-01-30T00:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.001810 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.001881 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.001901 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.001929 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.001949 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.105063 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.105140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.105167 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.105197 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.105216 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.208372 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.208430 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.208440 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.208457 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.208469 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.311994 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.312068 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.312093 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.312119 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.312139 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.414926 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.414989 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.415005 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.415029 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.415102 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.518270 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.518387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.518418 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.518453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.518477 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.621699 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.622594 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.622970 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.623286 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.623497 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.726962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.727017 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.727030 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.727051 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.727064 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.829533 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.829622 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.829642 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.829671 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.829762 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.871953 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:45 crc kubenswrapper[5110]: E0130 00:13:45.872205 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
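
The NodeNotReady condition repeating above is the runtime network readiness check: the kubelet keeps reporting NetworkPluginNotReady until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, and the same condition is logged on every status-loop tick until then. A minimal sketch of that directory check, assuming the conventional *.conf/*.conflist/*.json naming (the exact patterns and ordering used by the real kubelet/CRI-O code path are not shown in this log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Sketch of the readiness condition logged above: the node stays NotReady
// while no CNI network config exists. Directory and glob patterns are the
// conventional ones; this does not reproduce the real kubelet/CRI-O code.
func main() {
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			continue // Glob only errors on a bad pattern, which these literals are not
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Fprintln(os.Stderr, "no CNI configuration file in", dir, "- network plugin not ready")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}

On a node like this one the directory typically stays empty until the cluster network pods come up and write their config, which is why the condition clears on its own once the network provider starts.
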
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.933587 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.933694 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.933731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.933780 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:45 crc kubenswrapper[5110]: I0130 00:13:45.933811 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:45Z","lastTransitionTime":"2026-01-30T00:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.038186 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.038744 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.039211 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.039556 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.039899 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.144041 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.144122 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.144143 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.144172 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.144192 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.248019 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.248560 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.248729 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.248897 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.249045 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.352499 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.353052 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.353261 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.353495 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.353701 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.457291 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.457732 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.457904 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.458090 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.458372 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.561463 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.561966 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.562164 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.562426 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.562668 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.666568 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.666655 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.666676 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.666704 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.666727 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.675386 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.675592 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.675674 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.675732 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.675794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.675973 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676077 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:54.676050945 +0000 UTC m=+96.634287114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676171 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:13:54.676141777 +0000 UTC m=+96.634377936 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676361 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676544 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676606 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676638 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676546 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:54.676505727 +0000 UTC m=+96.634741886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.676776 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:54.676747763 +0000 UTC m=+96.634984112 (durationBeforeRetry 8s). 
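
The durationBeforeRetry 8s and the m=+96... monotonic offsets above show the volume reconciler's backoff: each failed mount/unmount operation roughly doubles the wait before the next attempt is permitted. A toy version of that pattern follows; the starting delay and the cap are illustrative assumptions, not the kubelet's actual constants.

package main

import (
	"fmt"
	"time"
)

// Toy version of the backoff behind "durationBeforeRetry 8s": each failed
// volume operation roughly doubles the wait before the reconciler retries,
// up to a cap. Initial delay and cap here are assumptions for illustration.
func main() {
	delay := 2 * time.Second
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the underlying cause clears (here, the objects becoming available and the CSI driver registering), a later retry in this schedule succeeds and the pending operation drains.
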
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.677224 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.677432 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.677607 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.677865 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:13:54.677835522 +0000 UTC m=+96.636071851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.769532 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.769683 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.769706 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.769874 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.769900 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.777061 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.777399 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.777576 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. No retries permitted until 2026-01-30 00:13:54.777512857 +0000 UTC m=+96.735749166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.871495 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.871497 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.871736 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.871869 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.871962 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:46 crc kubenswrapper[5110]: E0130 00:13:46.872061 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
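
The repeated object "namespace"/"name" not registered failures above come from the kubelet's per-pod object cache: after a kubelet restart, secrets and configmaps are only served to volume mounts once a pod referencing them has been (re)registered with the manager, so early lookups fail and the mounts are queued for retry on the backoff schedule shown earlier. A toy cache illustrating that behavior; the type and method names are invented for illustration, not the kubelet's real API:

package main

import "fmt"

// Toy illustration of the "object ... not registered" errors above: lookups
// fail until something registers a pod reference for the object.
type objectCache struct {
	registered map[string]bool
}

func (c *objectCache) RegisterPodObject(ns, name string) {
	c.registered[ns+"/"+name] = true
}

func (c *objectCache) Get(ns, name string) error {
	if !c.registered[ns+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return nil
}

func main() {
	c := &objectCache{registered: map[string]bool{}}
	// Mirrors the failure for openshift-multus/metrics-daemon-secret:
	fmt.Println(c.Get("openshift-multus", "metrics-daemon-secret"))
	c.RegisterPodObject("openshift-multus", "metrics-daemon-secret")
	fmt.Println(c.Get("openshift-multus", "metrics-daemon-secret")) // <nil> once registered
}
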
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.874103 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.874191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.874215 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.874242 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.874296 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.978120 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.978697 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.978909 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.979112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:46 crc kubenswrapper[5110]: I0130 00:13:46.979277 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:46Z","lastTransitionTime":"2026-01-30T00:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.082855 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.082940 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.082962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.082993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.083013 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.185872 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.185957 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.185977 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.186009 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.186031 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.290155 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.290610 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.290781 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.290913 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.291025 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.393908 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.394046 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.394068 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.394492 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.394520 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.497765 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.497865 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.497895 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.497926 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.497950 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.600710 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.600775 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.600789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.600807 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.600822 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.703943 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.704075 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.704095 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.704119 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.704137 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.807085 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.807167 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.807187 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.807213 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.807233 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.872473 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:47 crc kubenswrapper[5110]: E0130 00:13:47.872723 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.910029 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.910106 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.910127 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.910155 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:47 crc kubenswrapper[5110]: I0130 00:13:47.910178 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:47Z","lastTransitionTime":"2026-01-30T00:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.013022 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.013094 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.013113 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.013140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.013161 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.014318 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.014381 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.014391 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.014408 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.014417 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.029752 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.034297 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.034403 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.034425 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.034452 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.034470 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.046363 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
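
The "Error updating node status, will retry" failure above is the same bootstrap ordering problem as the CNI condition: the node status patch is admitted by the node.network-node-identity.openshift.io webhook, and nothing is listening on 127.0.0.1:9743 yet, hence "connection refused". A minimal reachability probe for that symptom (illustrative only; the real webhook call is made by the API server during admission, not by hand):

package main

import (
	"fmt"
	"net"
	"time"
)

// Minimal probe for the patch failure above: until the webhook endpoint on
// 127.0.0.1:9743 starts listening, the node status PATCH keeps failing and
// the kubelet retries it on each status sync.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint not reachable:", err) // e.g. connection refused
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}
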
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.052402 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.052487 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.052508 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.052540 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.052562 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.069534 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f599707-daad-4f90-b3eb-35dae3554a65\\\",\\\"systemUUID\\\":\\\"c89c8d44-7a3d-4e2c-9c1d-0c2f332b8db4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074769 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074814 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074828 5110 
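[Editor's note: the "Error updating node status, will retry" entries in this sequence are byte-identical apart from their timestamps; each retry re-sends the full node status, including the entire image list, and each fails the same way because nothing is listening on 127.0.0.1:9743, the endpoint behind the node.network-node-identity.openshift.io webhook (which appears to be an OpenShift networking component running on the node). A minimal sketch for confirming that pattern from a saved journal excerpt follows; the file name kubelet.log is a hypothetical assumption, e.g. from an export such as a journalctl dump of this unit.]

    # Tally webhook "connection refused" style failures per endpoint in a
    # plain-text journal excerpt (assumed saved as kubelet.log).
    import re
    from collections import Counter

    # Matches the escaped-quote form seen in the log: \"webhook-name\" ... Post \"url\"
    PATTERN = re.compile(
        r'failed calling webhook \\?"([^"\\]+)\\?".*?Post \\?"(https://[^"\\]+)\\?"'
    )

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            for webhook, url in PATTERN.findall(line):
                counts[(webhook, url)] += 1

    for (webhook, url), n in counts.most_common():
        print(f"{n:4d}  {webhook}  {url}")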
Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074828 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074847 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.074862 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.090554 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"[status patch payload elided: byte-identical to the preceding attempts]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.095793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.095866 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
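[Editor's note: each cycle above shows two independent symptoms. The node is NotReady because no CNI configuration exists in /etc/kubernetes/cni/net.d/, and separately the status patch is rejected because the webhook endpoint on 127.0.0.1:9743 refuses connections. Both preconditions can be probed directly; this is an illustrative sketch assuming it is run on the node itself, not a supported diagnostic. The paths and address are taken verbatim from the log messages.]

    # Probe the two failing dependencies reported in the log, from the node.
    import os
    import socket

    CNI_DIR = "/etc/kubernetes/cni/net.d/"   # path from the NetworkPluginNotReady message
    WEBHOOK_ADDR = ("127.0.0.1", 9743)       # endpoint from the webhook Post URL

    try:
        entries = os.listdir(CNI_DIR)
        print(f"{CNI_DIR}: {entries or 'empty (matches NetworkPluginNotReady)'}")
    except FileNotFoundError:
        print(f"{CNI_DIR}: missing")

    s = socket.socket()
    s.settimeout(2)
    try:
        s.connect(WEBHOOK_ADDR)
        print("webhook endpoint is accepting connections")
    except OSError as exc:
        # Expect 'connection refused' while the webhook server is down.
        print(f"webhook endpoint unreachable: {exc}")
    finally:
        s.close()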
Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.095890 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.095917 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.095937 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.112070 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"[status patch payload elided: byte-identical to the preceding attempts]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.112322 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.115742 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.115830 5110
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.115851 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.115887 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.115911 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.218889 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.218980 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.218998 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.219027 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.219046 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.322147 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.322236 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.322266 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.322304 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.322329 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.425015 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.425098 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.425152 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.425200 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.425226 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.528715 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.528812 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.528854 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.528890 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.528910 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.632101 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.632190 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.632210 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.632246 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.632263 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.742685 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.742807 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.742834 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.742877 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.742914 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.846637 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.846762 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.846793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.846828 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.846858 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.872302 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.872415 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.872652 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.872883 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.873115 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:48 crc kubenswrapper[5110]: E0130 00:13:48.873542 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.892690 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.906612 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.919604 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.930190 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950120 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950666 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950690 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950145 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950722 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.950746 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:48Z","lastTransitionTime":"2026-01-30T00:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.970661 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:48 crc kubenswrapper[5110]: I0130 00:13:48.987094 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.004445 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.019259 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.037190 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.048481 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.053936 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.054172 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.054378 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.054596 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.054771 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.074545 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.104469 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b
1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.119156 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.131920 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.150265 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.157850 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.157926 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.157948 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.157977 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.157997 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.169919 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.182697 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.196764 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.260991 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.261063 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.261084 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.261115 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.261137 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.364150 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.364210 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.364223 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.364243 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.364254 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.467787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.467869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.467891 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.467924 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.467948 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.570884 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.571132 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.571412 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.571651 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.571787 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.675309 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.675590 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.675687 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.675757 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.675840 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.779241 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.779809 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.779881 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.779946 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.780011 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.871860 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:49 crc kubenswrapper[5110]: E0130 00:13:49.872956 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.873313 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" Jan 30 00:13:49 crc kubenswrapper[5110]: E0130 00:13:49.873631 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.882791 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.882875 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.882898 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.882931 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.882955 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.984534 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.984573 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.984582 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.984596 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:49 crc kubenswrapper[5110]: I0130 00:13:49.984604 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:49Z","lastTransitionTime":"2026-01-30T00:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.087056 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.087091 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.087099 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.087114 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.087123 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.189620 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.189901 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.190140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.190408 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.190639 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.293317 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.293454 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.293482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.293516 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.293540 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.395933 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.396018 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.396040 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.396071 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.396093 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.399237 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.498711 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.498789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.498809 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.498839 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.498859 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.601807 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.601882 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.601900 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.601925 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.601944 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.704905 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.705048 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.705072 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.705103 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.705122 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.807363 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.807446 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.807467 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.807498 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.807518 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.871830 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.872073 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.872498 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:50 crc kubenswrapper[5110]: E0130 00:13:50.872746 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:50 crc kubenswrapper[5110]: E0130 00:13:50.872853 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:50 crc kubenswrapper[5110]: E0130 00:13:50.873071 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.911617 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.911679 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.911700 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.911753 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:50 crc kubenswrapper[5110]: I0130 00:13:50.911777 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:50Z","lastTransitionTime":"2026-01-30T00:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.015196 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.015297 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.015324 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.015403 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.015436 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.120372 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.120450 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.120466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.120750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.120805 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.223053 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.223110 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.223128 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.223152 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.223169 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.325417 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.325473 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.325485 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.325504 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.325517 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.391553 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerStarted","Data":"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.391619 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerStarted","Data":"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.393185 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pll77" event={"ID":"0bf0b3ab-206c-49bb-a5bd-f177b968c344","Type":"ContainerStarted","Data":"3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.408323 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.420104 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.428099 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.428150 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.428162 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.428209 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.428219 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.435914 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.452806 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.470274 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.485142 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.498663 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.514920 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.531680 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.531767 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.531793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.531831 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.531857 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.534953 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.554380 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.572837 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.586026 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.604038 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.615367 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.634863 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.634912 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.634924 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.634941 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.634952 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.643695 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.668806 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b
1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.684152 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.696895 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.713538 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.735129 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.738198 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.738279 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.738306 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.738364 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.738391 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.752859 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.764064 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.773948 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.789968 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.804732 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.818086 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.833043 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.841662 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.841741 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.841766 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.841797 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.841825 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.850452 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.869316 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.871576 
5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:51 crc kubenswrapper[5110]: E0130 00:13:51.871788 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.884580 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-control
ler-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T0
0:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.903139 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.919191 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.939892 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.944868 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.944947 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.944967 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.944997 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.945018 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:51Z","lastTransitionTime":"2026-01-30T00:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.953169 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5110]: I0130 00:13:51.978391 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.004986 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e38
8ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e
388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.018655 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.033167 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.049105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.049191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.049220 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.049259 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.049289 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.151532 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.151587 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.151598 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.151614 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.151625 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.254258 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.254365 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.254392 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.254426 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.254448 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.357565 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.357655 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.357678 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.357712 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.357735 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.398215 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kz9zz" event={"ID":"7f9b7ad1-23e2-4a81-a158-29a14e73eed5","Type":"ContainerStarted","Data":"9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.424478 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.451728 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.460317 
5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.460470 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.460503 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.460540 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.460566 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.474089 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"al
locatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.488772 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.500018 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.519745 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regene
ration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",
\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.531469 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.554978 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.562996 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.563061 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: 
I0130 00:13:52.563080 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.563107 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.563129 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.584740 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2
b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"
cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a6
1bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.598134 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.613011 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.629886 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.646318 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.658823 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.666205 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.666245 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.666257 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.666275 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.666287 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.674732 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.690247 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.705936 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.721317 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.736468 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.768627 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.768702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.768724 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.768752 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.768772 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.871483 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.871578 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872061 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872113 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872169 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: E0130 00:13:52.872100 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872191 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.872504 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:52 crc kubenswrapper[5110]: E0130 00:13:52.872701 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:52 crc kubenswrapper[5110]: E0130 00:13:52.872828 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.975987 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.976083 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.976105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.976137 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:52 crc kubenswrapper[5110]: I0130 00:13:52.976158 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:52Z","lastTransitionTime":"2026-01-30T00:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.079017 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.079076 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.079096 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.079118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.079135 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.181847 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.181911 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.181927 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.181950 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.181965 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.285224 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.285275 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.285286 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.285305 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.285315 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.389097 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.389180 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.389204 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.389237 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.389262 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.404265 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerStarted","Data":"aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.406856 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1" exitCode=0 Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.407003 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.421363 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.451042 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.466250 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.492806 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.492883 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.492906 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.492933 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.492951 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.493814 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.520196 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b
1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.531467 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.544904 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.563413 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.588306 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.595953 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.595999 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.596014 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.596033 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.596049 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.598485 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.613640 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.633112 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.646651 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.662908 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.676467 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.689981 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.699188 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.699237 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.699251 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.699275 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.699289 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.731783 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.747230 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T
00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.763023 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.778716 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.794020 5110 status_manager.go:919] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.801173 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.801218 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.801290 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.801309 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.801321 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.808328 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.821664 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 
00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.833494 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.850468 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.872181 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:53 crc kubenswrapper[5110]: E0130 00:13:53.872445 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.874416 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.888444 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.899500 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.906496 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.906628 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.906702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.906733 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.906809 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:53Z","lastTransitionTime":"2026-01-30T00:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.913285 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.928259 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.938230 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\"
:\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.954188 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.972495 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5110]: I0130 00:13:53.989635 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.006017 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.010881 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.010920 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.010932 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.010953 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.010965 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.017857 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.030035 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.055684 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"im
ageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.113181 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.113272 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.113301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.113373 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.113407 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.216686 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.216751 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.216768 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.216789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.216805 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.321276 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.321380 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.321403 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.321431 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.321450 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.413294 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6" exitCode=0 Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.413434 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421315 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421502 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"7460b800f35d430074709dfbb44364da98d88ad1209ee300d7bfd6c403e65a68"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421530 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"344c3fae88cab9b3c695182ba5b3125c4bb651be76736410791242a9efc51abb"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421588 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"0e7744345e7a304226006eddb988fdac7f93b2ffc2d953da5266ab7f9f8b2983"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421623 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"14af1f5b9ba102050657728a106d998d5185fc102e772b9ddf9b7f98af2914c2"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.421641 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"531d80f4432da2b8b09a05cf156a5afde04c2d29f2e77a15f3d8134940cb21b5"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.426952 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.427024 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.427046 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.427075 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.427097 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.427930 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v6j88" event={"ID":"f47cb22d-f09e-43a7-95e0-0e1008827f08","Type":"ContainerStarted","Data":"f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.428120 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.443162 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.461554 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.478758 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.494628 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.514793 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:
12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.529628 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.529708 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.529729 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.529755 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.529774 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.536505 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\
\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T0
0:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.548502 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.595194 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.616678 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.643864 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.643910 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.643922 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.643938 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.643952 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.653452 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.679783 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.680477 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.680567 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.680609 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.680655 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.680688 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.680941 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.680953 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.680980 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130
00:13:54.681017 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681031 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.681007647 +0000 UTC m=+112.639243776 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681057 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.681047788 +0000 UTC m=+112.639283907 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681033 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.680994 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681278 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681299 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.681238303 +0000 UTC m=+112.639474622 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.680784 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681408 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.681374157 +0000 UTC m=+112.639610466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.681435 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.681425938 +0000 UTC m=+112.639662247 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.689823 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.698832 5110 
status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.709315 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.719061 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.726663 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.737438 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748502 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748555 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748575 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748601 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748614 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.748984 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.765140 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.778169 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.781650 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.781877 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.781966 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs podName:1fbd252e-c54f-4a19-b637-adb4d23722fc nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.781945015 +0000 UTC m=+112.740181154 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs") pod "network-metrics-daemon-vwf28" (UID: "1fbd252e-c54f-4a19-b637-adb4d23722fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.789817 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.802237 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.814451 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.825810 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.834857 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.846036 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.850795 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.850832 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.850843 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.850860 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.850871 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.859864 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.871902 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.871988 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.872460 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.873073 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.872808 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:13:54 crc kubenswrapper[5110]: E0130 00:13:54.873390 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.879797 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni
bin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.898111 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.912886 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.927494 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.946314 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.958030 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.958070 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.958082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.958096 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.958107 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:54Z","lastTransitionTime":"2026-01-30T00:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.959191 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5110]: I0130 00:13:54.982228 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.008792 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\
\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.022713 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.037885 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.060438 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.060479 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.060493 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.060515 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.060529 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.163578 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.163636 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.163649 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.163670 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.163684 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.266273 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.266323 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.266424 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.266458 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.266480 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.369091 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.369209 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.369232 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.369269 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.369290 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.435324 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="84b7ab2125db15dc41aa0fd3857703f2bdfdf29ea3981f47f1674576904d9257" exitCode=0 Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.435436 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"84b7ab2125db15dc41aa0fd3857703f2bdfdf29ea3981f47f1674576904d9257"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.437630 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"b4969bd49f432a298435f0dc24dd0580cfce5b2f4a32021e7822fd8c2ab2fd12"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.443043 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"d16f1b182b999a118ff5758d8ec721762cde4a9b891f9c3d5e838694f2ddbc57"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.443112 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.446293 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"037e4beb35758b4076a41597fab75fcf7091c684d7721ab9230061b024079f69"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.449409 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"404b253d89f0aa6477cb4a7f83731d4938588cc071558fd808fe4a807a8da4ef"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.449497 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"301ea016af394d42f82328d23f45a55bc4576742fd9da9b59ffd002483ac5f62"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.455083 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.472917 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.474318 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.474366 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.474376 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.474394 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.474407 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.485398 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kz9zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f9b7ad1-23e2-4a81-a158-29a14e73eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f3369627a0efaf4076810cdbbc0e2445a98e6c4dd6594503db72db1794e3709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz286\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kz9zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.499198 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97dc714a-5d84-4c81-99ef-13067437fcad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt6lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t6dv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.517187 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.537010 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.555484 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.576178 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.577823 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.577888 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.577917 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.577973 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.578007 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.595127 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v6j88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f47cb22d-f09e-43a7-95e0-0e1008827f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-d8kcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v6j88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.611852 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"560f3d2b-f6b8-42cd-9a6a-2c141c780302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad9a4e4016104dd3f6e63baa1c2d339b45186a6fda449c36a67d987c24549c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b7ab2125db15dc41aa0fd3857703f2bdfdf29ea3981f47f1674576904d9257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b7ab2125db15dc41aa0fd3857703f2bdfdf29ea3981f47f1674576904d9257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-fla
tfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjsx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jf6rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.628999 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f524c6b3-97c8-45bc-943c-33470be28927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c9e68357d7e8460b2543ee785d4ea5d09db2bd6c09797e28dbc3c1ca852be78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c5c150dbc4db01f89113f1d340e667d152180c7e8e9325ff1af14938da5b745\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b5b848865b13719cad3afc90adf7a62b455729bf2c31064115092d020cfbac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.647245 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348d217f-a5d4-4cea-aa73-51dc123257df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a6f985e0dee53b2f7df64d592e03be073bed0d029716eaa1a645471fe89b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://88fbcaec4bbc60735e32f0373cbd7595dfce5a6fb939b29fcf9759c4d2bf8038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068ec55f572c303f249f1370ffd5ef2e37332374f70bd0a82341371ebc88d10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ec95c083fd224bb41848e7328e8bf77b80c37919416c03f8168956856042c85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.661095 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65545be2-9d3b-4fd2-88db-3c6eae7340d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://033e49f8c22656c2fbbf5f4bb4dc9768bd8322559c78b2f1aa1f5c1966251bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dd602ac37c280166f39dbb6e5d23edd157a553c8a00fb4eb5b2a8ba37dffb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683094 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683154 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683167 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683190 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683206 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.683313 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c87c5f-e016-42c1-8e6a-36e478fe2592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:13:24Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:13:24.600197 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:13:24.600372 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:13:24.601167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3174828304/tls.crt::/tmp/serving-cert-3174828304/tls.key\\\\\\\"\\\\nI0130 00:13:24.815315 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:13:24.817404 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:13:24.817422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:13:24.817449 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:13:24.817459 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:13:24.822254 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:13:24.822273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822278 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:13:24.822282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:13:24.822286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:13:24.822290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:13:24.822293 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 00:13:24.822311 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 00:13:24.825063 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.696278 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-pll77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf0b3ab-206c-49bb-a5bd-f177b968c344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c6fcf2c76fda8ab32d0cc71d189e7ff01b98bea5901cdae73d1fd37a1fe9400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m2zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pll77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.716012 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89a63cd7-c2e9-4666-a363-aa6f67187756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:13:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7sdgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xdrfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.737677 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396a4508-4df8-41b3-899f-0b26221cca40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dd9c16cc7a68958ed10c778cf797cee648cf16d51926d136601b3ab9f8896ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://200710ea375ce8b949ae604abb70e6fd5274c87c743aca8b049d3e76562081f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c8d6cdf4e83721d45c697f9a20b72ed4de61edfe305995a3ec67f8e1afa2a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad85501347e6e0836afb58c023cbd0a48703b1b8bdcf70ab38cd3465bdf46251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://7ef917af920550f1369e07165498bdb1a229b8f279e36290b948f5a447acc1b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bbc0d9e0de1e998b43512bc44e46bc5a9691b1c11ac3c909aececa0c4eaf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b41cbd0363038f7aa0bfe3a6b9e599e978d63f940fe1f1f9ced7dfc04868687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\
\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3363836d6f3f84dc63d6b723f37d9b7ec68593fb0498d5d2e3f1befdc6ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.749321 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwf28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fbd252e-c54f-4a19-b637-adb4d23722fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-697d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwf28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.761531 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:13:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rc9v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:13:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xfqbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.786931 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.787021 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.787046 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.787097 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.787126 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.871619 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v6j88" podStartSLOduration=76.871580414 podStartE2EDuration="1m16.871580414s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:55.871256116 +0000 UTC m=+97.829492285" watchObservedRunningTime="2026-01-30 00:13:55.871580414 +0000 UTC m=+97.829816583" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.871633 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:55 crc kubenswrapper[5110]: E0130 00:13:55.872279 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.890646 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.890721 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.890743 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.890768 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.890783 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.921037 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.921010151 podStartE2EDuration="17.921010151s" podCreationTimestamp="2026-01-30 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:55.920851577 +0000 UTC m=+97.879087716" watchObservedRunningTime="2026-01-30 00:13:55.921010151 +0000 UTC m=+97.879246280" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.939625 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.939585438 podStartE2EDuration="17.939585438s" podCreationTimestamp="2026-01-30 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:55.939041594 +0000 UTC m=+97.897277813" watchObservedRunningTime="2026-01-30 00:13:55.939585438 +0000 UTC m=+97.897821607" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.954218 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.954188301 podStartE2EDuration="17.954188301s" podCreationTimestamp="2026-01-30 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:55.953236116 +0000 UTC m=+97.911472255" watchObservedRunningTime="2026-01-30 00:13:55.954188301 +0000 UTC m=+97.912424460" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.993756 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.993821 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.993836 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.993866 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.993885 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:55Z","lastTransitionTime":"2026-01-30T00:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:55 crc kubenswrapper[5110]: I0130 00:13:55.996772 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pll77" podStartSLOduration=76.996744618 podStartE2EDuration="1m16.996744618s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:55.996136432 +0000 UTC m=+97.954372601" watchObservedRunningTime="2026-01-30 00:13:55.996744618 +0000 UTC m=+97.954980767" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.064043 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.064009053 podStartE2EDuration="18.064009053s" podCreationTimestamp="2026-01-30 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:56.061898597 +0000 UTC m=+98.020134746" watchObservedRunningTime="2026-01-30 00:13:56.064009053 +0000 UTC m=+98.022245212" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.099359 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.107376 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.107409 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.107433 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.107449 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.112875 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" podStartSLOduration=76.112850284 podStartE2EDuration="1m16.112850284s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:56.112033233 +0000 UTC m=+98.070269362" watchObservedRunningTime="2026-01-30 00:13:56.112850284 +0000 UTC m=+98.071086443" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.209585 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.209637 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.209652 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.209668 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.209680 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.245662 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kz9zz" podStartSLOduration=77.245635968 podStartE2EDuration="1m17.245635968s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:56.203889093 +0000 UTC m=+98.162125222" watchObservedRunningTime="2026-01-30 00:13:56.245635968 +0000 UTC m=+98.203872097" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.246166 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podStartSLOduration=77.246163122 podStartE2EDuration="1m17.246163122s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:56.245005962 +0000 UTC m=+98.203242111" watchObservedRunningTime="2026-01-30 00:13:56.246163122 +0000 UTC m=+98.204399251" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.313791 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.313842 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.313851 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.313869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.313879 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.416029 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.416083 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.416098 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.416121 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.416140 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.454951 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="c89c04e1806ae5ae266a4f94e87cb103a11d62d49f25fca0cb023435e8d7a182" exitCode=0
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.455028 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"c89c04e1806ae5ae266a4f94e87cb103a11d62d49f25fca0cb023435e8d7a182"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.518885 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.518953 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.518971 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.518993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.519008 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.626544 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.626618 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.626640 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.626674 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.626696 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.729384 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.729433 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.729444 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.729462 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.729487 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.832586 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.832663 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.832684 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.832713 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.832736 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.872298 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.872398 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:13:56 crc kubenswrapper[5110]: E0130 00:13:56.872954 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:13:56 crc kubenswrapper[5110]: E0130 00:13:56.873179 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.872439 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:13:56 crc kubenswrapper[5110]: E0130 00:13:56.873299 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.935894 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.935968 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.935988 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.936016 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:56 crc kubenswrapper[5110]: I0130 00:13:56.936038 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:56Z","lastTransitionTime":"2026-01-30T00:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.038777 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.038836 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.038852 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.038873 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.038883 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.142096 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.142156 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.142171 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.142194 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.142208 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.244993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.245256 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.245324 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.245431 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.245512 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.348351 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.348390 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.348401 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.348416 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.348427 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.450593 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.450628 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.450638 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.450654 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.450663 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.461611 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="374f51fabada8d11f7783b3321751ae6ee1604ac634ea886e2c1c9bba1c23588" exitCode=0
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.461680 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"374f51fabada8d11f7783b3321751ae6ee1604ac634ea886e2c1c9bba1c23588"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.469209 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.554638 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.554710 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.554730 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.554757 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.554776 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.657817 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.657896 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.657917 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.657947 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.657991 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.761789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.761860 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.761878 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.761905 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.761925 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.865212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.865315 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.865396 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.865441 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.865469 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.871931 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:13:57 crc kubenswrapper[5110]: E0130 00:13:57.872169 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.968834 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.968916 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.968937 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.968969 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:57 crc kubenswrapper[5110]: I0130 00:13:57.968990 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:57Z","lastTransitionTime":"2026-01-30T00:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.071938 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.071994 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.072005 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.072023 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.072036 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:58Z","lastTransitionTime":"2026-01-30T00:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.174999 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.175673 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.175692 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.175721 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.175737 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:58Z","lastTransitionTime":"2026-01-30T00:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.279459 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.279542 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.279566 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.279595 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.279616 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:58Z","lastTransitionTime":"2026-01-30T00:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.289791 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.289845 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.289869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.289892 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.289910 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:13:58Z","lastTransitionTime":"2026-01-30T00:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.359638 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"]
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.370995 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.375163 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.375475 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.375817 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.376067 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.431183 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311cc27e-e0c3-432a-809b-33fdb80e189c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.431325 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/311cc27e-e0c3-432a-809b-33fdb80e189c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.431403 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.431465 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.431536 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/311cc27e-e0c3-432a-809b-33fdb80e189c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.480324 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerStarted","Data":"952de74651e8195eaf647c226fa4cdfee991d9ee3f59da0089a5e6a8f366fd04"}
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533103 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/311cc27e-e0c3-432a-809b-33fdb80e189c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533249 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311cc27e-e0c3-432a-809b-33fdb80e189c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533311 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/311cc27e-e0c3-432a-809b-33fdb80e189c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533396 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533457 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533592 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.533888 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/311cc27e-e0c3-432a-809b-33fdb80e189c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.536565 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/311cc27e-e0c3-432a-809b-33fdb80e189c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.548867 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311cc27e-e0c3-432a-809b-33fdb80e189c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.560199 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/311cc27e-e0c3-432a-809b-33fdb80e189c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-t79kx\" (UID: \"311cc27e-e0c3-432a-809b-33fdb80e189c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.694146 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx"
Jan 30 00:13:58 crc kubenswrapper[5110]: W0130 00:13:58.717882 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311cc27e_e0c3_432a_809b_33fdb80e189c.slice/crio-4d2f0f09f61cbb973173fe7043fd8f2031a483b4e8071b102e1a1d52e44c7d34 WatchSource:0}: Error finding container 4d2f0f09f61cbb973173fe7043fd8f2031a483b4e8071b102e1a1d52e44c7d34: Status 404 returned error can't find the container with id 4d2f0f09f61cbb973173fe7043fd8f2031a483b4e8071b102e1a1d52e44c7d34
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.846525 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.859892 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.874203 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.874385 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:13:58 crc kubenswrapper[5110]: I0130 00:13:58.874443 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:13:58 crc kubenswrapper[5110]: E0130 00:13:58.874474 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:13:58 crc kubenswrapper[5110]: E0130 00:13:58.874565 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:13:58 crc kubenswrapper[5110]: E0130 00:13:58.874758 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.489405 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="952de74651e8195eaf647c226fa4cdfee991d9ee3f59da0089a5e6a8f366fd04" exitCode=0
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.489521 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"952de74651e8195eaf647c226fa4cdfee991d9ee3f59da0089a5e6a8f366fd04"}
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.493088 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx" event={"ID":"311cc27e-e0c3-432a-809b-33fdb80e189c","Type":"ContainerStarted","Data":"94d1c693a87067fa502860f4efdec9999ff420fc40ac51c95bbe07f073e29d51"}
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.493199 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx" event={"ID":"311cc27e-e0c3-432a-809b-33fdb80e189c","Type":"ContainerStarted","Data":"4d2f0f09f61cbb973173fe7043fd8f2031a483b4e8071b102e1a1d52e44c7d34"}
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.502670 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerStarted","Data":"a8fbf8c2a126adc08588bc73603a0c7c14c966eea5a4489d3a1a47e87251e041"}
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.591475 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.591726 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.591758 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.631434 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-t79kx" podStartSLOduration=79.631399799 podStartE2EDuration="1m19.631399799s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:59.558879016 +0000 UTC m=+101.517115155" watchObservedRunningTime="2026-01-30 00:13:59.631399799 +0000 UTC m=+101.589635968"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.637173 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.639179 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.672450 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podStartSLOduration=79.672429335 podStartE2EDuration="1m19.672429335s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:59.631998714 +0000 UTC m=+101.590234843" watchObservedRunningTime="2026-01-30 00:13:59.672429335 +0000 UTC m=+101.630665464"
Jan 30 00:13:59 crc kubenswrapper[5110]: I0130 00:13:59.871976 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:13:59 crc kubenswrapper[5110]: E0130 00:13:59.872254 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:14:00 crc kubenswrapper[5110]: I0130 00:14:00.514835 5110 generic.go:358] "Generic (PLEG): container finished" podID="560f3d2b-f6b8-42cd-9a6a-2c141c780302" containerID="55a50b0ec63c4f21d6fc87b0b560abe73c2d7404a4a819e13a613c437ab7db32" exitCode=0
Jan 30 00:14:00 crc kubenswrapper[5110]: I0130 00:14:00.514938 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerDied","Data":"55a50b0ec63c4f21d6fc87b0b560abe73c2d7404a4a819e13a613c437ab7db32"}
Jan 30 00:14:00 crc kubenswrapper[5110]: I0130 00:14:00.873804 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:14:00 crc kubenswrapper[5110]: E0130 00:14:00.873922 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:14:00 crc kubenswrapper[5110]: I0130 00:14:00.873993 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:14:00 crc kubenswrapper[5110]: I0130 00:14:00.874243 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:14:00 crc kubenswrapper[5110]: E0130 00:14:00.874304 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:14:00 crc kubenswrapper[5110]: E0130 00:14:00.874364 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:14:01 crc kubenswrapper[5110]: I0130 00:14:01.522744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" event={"ID":"560f3d2b-f6b8-42cd-9a6a-2c141c780302","Type":"ContainerStarted","Data":"fe94874fd091a10feb602da2256298f29a18e4efdee52c05473be5d5b7b5e9a5"}
Jan 30 00:14:01 crc kubenswrapper[5110]: I0130 00:14:01.872033 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:14:01 crc kubenswrapper[5110]: E0130 00:14:01.873300 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:14:01 crc kubenswrapper[5110]: I0130 00:14:01.923620 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jf6rt" podStartSLOduration=82.923593048 podStartE2EDuration="1m22.923593048s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:01.551102625 +0000 UTC m=+103.509338764" watchObservedRunningTime="2026-01-30 00:14:01.923593048 +0000 UTC m=+103.881829207"
Jan 30 00:14:01 crc kubenswrapper[5110]: I0130 00:14:01.923999 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vwf28"]
Jan 30 00:14:01 crc kubenswrapper[5110]: I0130 00:14:01.924247 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:14:01 crc kubenswrapper[5110]: E0130 00:14:01.924446 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:14:02 crc kubenswrapper[5110]: I0130 00:14:02.872408 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:14:02 crc kubenswrapper[5110]: I0130 00:14:02.872512 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:14:02 crc kubenswrapper[5110]: E0130 00:14:02.872641 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:14:02 crc kubenswrapper[5110]: E0130 00:14:02.873368 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:14:02 crc kubenswrapper[5110]: I0130 00:14:02.873751 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f"
Jan 30 00:14:02 crc kubenswrapper[5110]: E0130 00:14:02.874241 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:14:03 crc kubenswrapper[5110]: I0130 00:14:03.871973 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:14:03 crc kubenswrapper[5110]: E0130 00:14:03.872352 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:14:03 crc kubenswrapper[5110]: I0130 00:14:03.872634 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:14:03 crc kubenswrapper[5110]: E0130 00:14:03.873759 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:14:04 crc kubenswrapper[5110]: I0130 00:14:04.871920 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:14:04 crc kubenswrapper[5110]: I0130 00:14:04.872041 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:14:04 crc kubenswrapper[5110]: E0130 00:14:04.872164 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:14:04 crc kubenswrapper[5110]: E0130 00:14:04.872707 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:14:05 crc kubenswrapper[5110]: I0130 00:14:05.871835 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28"
Jan 30 00:14:05 crc kubenswrapper[5110]: E0130 00:14:05.872073 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwf28" podUID="1fbd252e-c54f-4a19-b637-adb4d23722fc"
Jan 30 00:14:05 crc kubenswrapper[5110]: I0130 00:14:05.872113 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:14:05 crc kubenswrapper[5110]: E0130 00:14:05.872403 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.572174 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.572539 5110 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.629159 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7vndv"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.646653 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-62799"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.646954 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.650851 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.651253 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.654163 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.654674 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.660157 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.662518 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.665975 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.670732 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.670778 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.671381 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.671766 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.671939 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.672261 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.672459 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.672687 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.678179 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.678996 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.692976 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.701180 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.701412 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.704871 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.705693 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.705949 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.706279 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.706400 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.706498 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.706591 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.707015 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708349 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708478 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-bvwvj"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708586 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708640 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708660 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708781 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708802 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.708976 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.709145 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.711928 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-r2msw"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.712257 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.716056 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.716205 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-bvwvj"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.716684 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.717766 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.718090 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.718221 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.718760 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.720198 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.720608 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.721120 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.722717 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.724034 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-q9fd8"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.724624 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.725721 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.725898 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.726036 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.726054 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.726166 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.726484 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.726832 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.727097 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.727439 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.730422 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.731087 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.731122 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.731428 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.731757 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.732175 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-tkt8c"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.732514 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.733385 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.733531 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.734430 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.735490 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29495520-r6lp4"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.735892 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.738261 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"]
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.739314 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.740587 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-r6lp4"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.754580 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.754624 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755191 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-policies\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755260 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-encryption-config\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799"
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755295 5110 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-serving-ca\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755317 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78e2f71e-f453-40aa-adf0-7a47d52731c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755369 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755396 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnt2\" (UniqueName: \"kubernetes.io/projected/a2642451-0e0a-4ffb-9356-e7d67106f912-kube-api-access-fxnt2\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755441 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755465 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755485 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755506 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-dir\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755536 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/ada32307-a77a-45ee-8310-40d64876b14c-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755559 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755575 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtb8q\" (UniqueName: \"kubernetes.io/projected/ada32307-a77a-45ee-8310-40d64876b14c-kube-api-access-qtb8q\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755618 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8tv\" (UniqueName: \"kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755640 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhnk\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-kube-api-access-xmhnk\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755673 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7vc\" (UniqueName: \"kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755708 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755729 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78e2f71e-f453-40aa-adf0-7a47d52731c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755762 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755796 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755820 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-serving-cert\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755847 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-config\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755874 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755892 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-client\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755927 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-images\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755952 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-trusted-ca-bundle\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.755971 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" 
(UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.757583 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.759903 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.761096 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.762423 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.765811 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.766355 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.767878 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.768294 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.768750 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.775078 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.778125 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.787305 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.787633 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.788837 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.788880 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.788970 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.789277 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.810865 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.811219 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.811790 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.811820 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.811897 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.811974 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812065 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812141 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812165 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812302 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812407 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812460 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812533 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:14:06 crc 
kubenswrapper[5110]: I0130 00:14:06.812555 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.812689 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.813361 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.814694 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.815403 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.817624 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.821525 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.822037 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.824287 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.824596 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.825411 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.825652 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.826942 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.827130 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.827643 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.829979 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.830634 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.830787 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:14:06 crc 
kubenswrapper[5110]: I0130 00:14:06.831106 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.832546 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.833589 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.833954 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.834216 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.834530 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.834803 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835044 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835204 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835372 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835372 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835501 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835623 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835627 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.835815 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.842289 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.842536 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.851877 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.853142 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.853448 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.856703 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-config\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.856740 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.856783 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-client\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857366 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-images\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857395 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-trusted-ca-bundle\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857424 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-ca-trust-extracted-pem\") pod 
\"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857471 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857511 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpqv\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857544 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857568 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857606 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-policies\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857628 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-encryption-config\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857652 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857678 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-serving-ca\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78e2f71e-f453-40aa-adf0-7a47d52731c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857717 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857742 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857768 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857796 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnt2\" (UniqueName: \"kubernetes.io/projected/a2642451-0e0a-4ffb-9356-e7d67106f912-kube-api-access-fxnt2\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857844 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857867 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857921 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857945 5110 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-dir\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.857977 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ada32307-a77a-45ee-8310-40d64876b14c-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858004 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858022 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtb8q\" (UniqueName: \"kubernetes.io/projected/ada32307-a77a-45ee-8310-40d64876b14c-kube-api-access-qtb8q\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858070 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8tv\" (UniqueName: \"kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858093 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmhnk\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-kube-api-access-xmhnk\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858128 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dq7vc\" (UniqueName: \"kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858159 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858184 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/78e2f71e-f453-40aa-adf0-7a47d52731c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858237 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858274 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858364 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858461 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858602 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-dir\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858712 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.858495 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-serving-cert\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.859248 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.859250 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-config\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: E0130 00:14:06.859620 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.359602902 +0000 UTC m=+109.317839031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.859751 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-serving-ca\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.860105 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ada32307-a77a-45ee-8310-40d64876b14c-images\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.860383 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.860664 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.861110 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-k8w5p"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.861228 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 
00:14:06.861361 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.861570 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.862180 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-trusted-ca-bundle\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.862134 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78e2f71e-f453-40aa-adf0-7a47d52731c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.862177 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.862585 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2642451-0e0a-4ffb-9356-e7d67106f912-audit-policies\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.862655 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/78e2f71e-f453-40aa-adf0-7a47d52731c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.865214 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.865714 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-encryption-config\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.865901 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-etcd-client\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.866982 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ada32307-a77a-45ee-8310-40d64876b14c-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.867037 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.867433 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2642451-0e0a-4ffb-9356-e7d67106f912-serving-cert\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.867936 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78e2f71e-f453-40aa-adf0-7a47d52731c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.871073 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.872009 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.877237 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-k8w5p" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.883826 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.895078 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.895101 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.895125 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.895182 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.901461 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.901597 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.905348 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.905968 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.908790 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4qlhj"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.908926 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.909303 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.912009 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.912848 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.915317 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dvkqm"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.915425 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.922125 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.922244 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.929100 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.935497 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.935673 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.935939 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.940366 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.940464 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.943878 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.944153 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.949088 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7vndv"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.949124 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.949589 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.950009 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.954983 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.955194 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.958297 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-q4bkd"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.958574 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.959541 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:06 crc kubenswrapper[5110]: E0130 00:14:06.959759 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.459730579 +0000 UTC m=+109.417966698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.959886 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960020 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-serving-cert\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960098 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glnq\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-kube-api-access-9glnq\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960184 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-config\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960279 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 
00:14:06.960434 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-config\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960575 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960651 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce26ddbd-18c7-48f6-83bf-1124a0467647-config\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960722 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-metrics-tls\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960802 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjpcc\" (UniqueName: \"kubernetes.io/projected/1f225323-7f5a-46bf-a9a3-1093d025b0b7-kube-api-access-qjpcc\") pod \"downloads-747b44746d-k8w5p\" (UID: \"1f225323-7f5a-46bf-a9a3-1093d025b0b7\") " pod="openshift-console/downloads-747b44746d-k8w5p" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.960884 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: E0130 00:14:06.960903 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.460882429 +0000 UTC m=+109.419118558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961050 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961149 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961228 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-trusted-ca\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961303 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rq8b\" (UniqueName: \"kubernetes.io/projected/f7a48a07-3ab8-4b38-be60-baa4f39a0757-kube-api-access-6rq8b\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961407 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26ddbd-18c7-48f6-83bf-1124a0467647-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6q6\" (UniqueName: \"kubernetes.io/projected/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-kube-api-access-7c6q6\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961566 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlpqv\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" 
Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961645 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961717 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961798 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961869 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-tmp-dir\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.961981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962058 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7a48a07-3ab8-4b38-be60-baa4f39a0757-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962150 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962239 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962313 
5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962424 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/959555fc-6a2d-4e6c-bc87-84864eeacb39-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962582 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962659 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962745 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962814 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28llz\" (UniqueName: \"kubernetes.io/projected/ce26ddbd-18c7-48f6-83bf-1124a0467647-kube-api-access-28llz\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.962952 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963128 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963157 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963201 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c50ad8-bd30-431a-80b1-290290cc1ea8-serving-cert\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963224 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c9c50ad8-bd30-431a-80b1-290290cc1ea8-available-featuregates\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963266 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963355 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/959555fc-6a2d-4e6c-bc87-84864eeacb39-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963380 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r89z\" (UniqueName: \"kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963400 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22vm\" (UniqueName: \"kubernetes.io/projected/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-kube-api-access-f22vm\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963450 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963469 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963487 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpvn\" (UniqueName: \"kubernetes.io/projected/c9c50ad8-bd30-431a-80b1-290290cc1ea8-kube-api-access-kvpvn\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963509 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963545 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963715 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963768 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.963977 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 
00:14:06.963985 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.964240 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.964279 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.964376 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.964562 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.966525 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.969729 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.970639 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.971500 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-t2qff"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.971684 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974322 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974364 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974383 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-62799"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974394 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974406 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-k8w5p"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974418 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974431 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-tkt8c"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974442 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974453 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-r6lp4"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974464 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974474 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-q9fd8"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974484 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.974495 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2dmcg"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.975064 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979717 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-r2msw"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979748 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979765 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979777 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979789 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979799 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979810 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979820 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.979833 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vvz9f"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.982837 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tm4w8"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.982986 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.983062 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987629 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987655 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987670 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4qlhj"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987684 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987701 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dvkqm"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987713 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-bvwvj"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987726 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987738 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987751 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9q6kw"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.987800 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.989198 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990409 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-t2qff"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990438 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2dmcg"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990451 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990462 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990474 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990484 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990495 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990506 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vvz9f"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990516 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990528 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tm4w8"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.990540 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zfs4d"] Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.991235 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:06 crc kubenswrapper[5110]: I0130 00:14:06.993595 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.028249 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.031006 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.049557 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065144 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065416 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-serving-cert\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065451 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9glnq\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-kube-api-access-9glnq\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-config\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065500 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-config\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065523 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065542 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce26ddbd-18c7-48f6-83bf-1124a0467647-config\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: 
\"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065559 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-metrics-tls\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065577 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjpcc\" (UniqueName: \"kubernetes.io/projected/1f225323-7f5a-46bf-a9a3-1093d025b0b7-kube-api-access-qjpcc\") pod \"downloads-747b44746d-k8w5p\" (UID: \"1f225323-7f5a-46bf-a9a3-1093d025b0b7\") " pod="openshift-console/downloads-747b44746d-k8w5p" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.065600 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.065740 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.565708928 +0000 UTC m=+109.523945057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.066462 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.066519 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-trusted-ca\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.066542 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rq8b\" (UniqueName: \"kubernetes.io/projected/f7a48a07-3ab8-4b38-be60-baa4f39a0757-kube-api-access-6rq8b\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:07 crc 
kubenswrapper[5110]: I0130 00:14:07.066565 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26ddbd-18c7-48f6-83bf-1124a0467647-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.066585 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6q6\" (UniqueName: \"kubernetes.io/projected/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-kube-api-access-7c6q6\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067141 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067196 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067194 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce26ddbd-18c7-48f6-83bf-1124a0467647-config\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067225 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067282 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-config\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.067381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068303 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-tmp-dir\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068390 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7a48a07-3ab8-4b38-be60-baa4f39a0757-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068420 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068442 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/959555fc-6a2d-4e6c-bc87-84864eeacb39-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068463 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068486 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068508 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068526 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28llz\" (UniqueName: \"kubernetes.io/projected/ce26ddbd-18c7-48f6-83bf-1124a0467647-kube-api-access-28llz\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068564 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068588 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068609 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c50ad8-bd30-431a-80b1-290290cc1ea8-serving-cert\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068665 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c9c50ad8-bd30-431a-80b1-290290cc1ea8-available-featuregates\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068683 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068717 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/959555fc-6a2d-4e6c-bc87-84864eeacb39-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068741 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6r89z\" (UniqueName: \"kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068760 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f22vm\" (UniqueName: \"kubernetes.io/projected/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-kube-api-access-f22vm\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068780 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: 
\"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068801 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068819 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068835 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpvn\" (UniqueName: \"kubernetes.io/projected/c9c50ad8-bd30-431a-80b1-290290cc1ea8-kube-api-access-kvpvn\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068855 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068890 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068908 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068925 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.068943 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069457 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069462 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069549 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-trusted-ca\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069659 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-tmp-dir\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069775 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.069964 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.070070 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-metrics-tls\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.070198 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.070307 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.070362 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.071246 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.071308 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.071456 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/959555fc-6a2d-4e6c-bc87-84864eeacb39-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.072402 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c9c50ad8-bd30-431a-80b1-290290cc1ea8-available-featuregates\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.073973 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.074168 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c50ad8-bd30-431a-80b1-290290cc1ea8-serving-cert\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.074884 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session\") 
pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.074909 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26ddbd-18c7-48f6-83bf-1124a0467647-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.075061 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.075651 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.075917 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.075985 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.076178 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/959555fc-6a2d-4e6c-bc87-84864eeacb39-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.076524 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.077699 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.089878 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.090839 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-serving-cert\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.097896 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.111236 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.132703 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.151067 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.156512 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-config\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.170171 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.170569 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.171185 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.671160685 +0000 UTC m=+109.629396854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.189927 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.208874 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.229766 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.269468 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmhnk\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-kube-api-access-xmhnk\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.272396 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.272623 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.772581576 +0000 UTC m=+109.730817705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.273083 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.273729 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
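The MountDevice and TearDown failures above all reduce to one root cause: the kubelet cannot build a CSI client because kubevirt.io.hostpath-provisioner has not (yet) registered over the kubelet's plugin-registration socket. Below is a minimal Go sketch of the lookup pattern behind that "not found in the list of registered CSI drivers" message, using a plain map and illustrative names, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry mimics, in deliberately simplified form, the kubelet's
// in-memory map of CSI drivers that have registered themselves. The type and
// field names are illustrative assumptions, not kubelet identifiers.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket endpoint
}

func (r *csiDriverRegistry) clientFor(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	endpoint, ok := r.drivers[name]
	if !ok {
		// The condition this log keeps hitting: the hostpath provisioner's
		// node plugin has not registered, so no client can be created.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return endpoint, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}
	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice would fail with:", err)
	}
}
```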
No retries permitted until 2026-01-30 00:14:07.773701435 +0000 UTC m=+109.731937594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.297304 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq7vc\" (UniqueName: \"kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc\") pod \"route-controller-manager-776cdc94d6-jwv7h\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.304025 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnt2\" (UniqueName: \"kubernetes.io/projected/a2642451-0e0a-4ffb-9356-e7d67106f912-kube-api-access-fxnt2\") pod \"apiserver-8596bd845d-62799\" (UID: \"a2642451-0e0a-4ffb-9356-e7d67106f912\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.336840 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78e2f71e-f453-40aa-adf0-7a47d52731c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fwqsr\" (UID: \"78e2f71e-f453-40aa-adf0-7a47d52731c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.344967 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8tv\" (UniqueName: \"kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv\") pod \"controller-manager-65b6cccf98-5p8zc\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.349638 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.363837 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7a48a07-3ab8-4b38-be60-baa4f39a0757-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.367484 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.370261 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.374149 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.374419 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.874380367 +0000 UTC m=+109.832616526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.375123 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.375789 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.875757283 +0000 UTC m=+109.833993452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.383469 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.390357 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.410722 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.450630 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.461879 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtb8q\" (UniqueName: \"kubernetes.io/projected/ada32307-a77a-45ee-8310-40d64876b14c-kube-api-access-qtb8q\") pod \"machine-api-operator-755bb95488-7vndv\" (UID: \"ada32307-a77a-45ee-8310-40d64876b14c\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.476117 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.476453 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.976413844 +0000 UTC m=+109.934650013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.476653 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.478147 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:07.978124929 +0000 UTC m=+109.936361068 (durationBeforeRetry 500ms). 
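Note that each failed volume operation is parked with a "No retries permitted until ..." deadline rather than retried in a tight loop; the pending-operations tracker applies a per-operation backoff (500ms at this point in the log). A standalone sketch of that backoff pattern follows, with illustrative initial and cap values rather than kubelet constants:

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff sketches the per-operation exponential backoff behind the
// "durationBeforeRetry 500ms" entries. Initial delay and cap are assumptions.
type expBackoff struct {
	delay time.Duration
	max   time.Duration
}

func (b *expBackoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := &expBackoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		d := b.next()
		retryAt := time.Now().Add(d)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %v)\n",
			i+1, retryAt.Format(time.RFC3339Nano), d)
	}
}
```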
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.498134 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.509968 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.530574 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.551283 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.572481 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.572755 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.579118 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.579460 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.079434277 +0000 UTC m=+110.037670406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.591150 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.601621 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.610047 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.621471 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.633890 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.651600 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.668580 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr"] Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.673852 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.681212 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.681606 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.181585287 +0000 UTC m=+110.139821426 (durationBeforeRetry 500ms). 
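The interleaved "Caches populated" lines are client-go reflectors completing their initial LIST for each watched Secret and ConfigMap before the kubelet will mount them. A hedged sketch of the same mechanism using a shared informer factory (the kubeconfig path is illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster; the kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync returns once the initial LIST has been stored --
	// the point at which a reflector logs "Caches populated".
	if cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		fmt.Println("configmap cache populated")
	}
}
```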
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.717841 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"] Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.718007 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.718388 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.739798 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.750047 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.777097 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.782384 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.782802 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.282783762 +0000 UTC m=+110.241019891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.791938 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.810467 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.838602 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.849874 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7vndv"] Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.851088 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:14:07 crc kubenswrapper[5110]: W0130 00:14:07.866800 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada32307_a77a_45ee_8310_40d64876b14c.slice/crio-207d1eb284783858dcafeceda669747d6c5e941102601bb32352bf2e43a9eae2 WatchSource:0}: Error finding container 207d1eb284783858dcafeceda669747d6c5e941102601bb32352bf2e43a9eae2: Status 404 returned error can't find the container with id 207d1eb284783858dcafeceda669747d6c5e941102601bb32352bf2e43a9eae2 Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.871684 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.871724 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.872553 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.885852 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.886281 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.386264847 +0000 UTC m=+110.344500976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.889975 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.903794 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-62799"] Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.909687 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:14:07 crc kubenswrapper[5110]: W0130 00:14:07.921771 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2642451_0e0a_4ffb_9356_e7d67106f912.slice/crio-bc0dd1a8f8acce7a325489f0544148b685415d5425114a3f0b78251160aca298 WatchSource:0}: Error finding container bc0dd1a8f8acce7a325489f0544148b685415d5425114a3f0b78251160aca298: Status 404 returned error can't find the container with id bc0dd1a8f8acce7a325489f0544148b685415d5425114a3f0b78251160aca298 Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.928763 5110 request.go:752] "Waited before sending request" delay="1.015533563s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.930376 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.938290 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"] Jan 30 00:14:07 crc kubenswrapper[5110]: W0130 00:14:07.948677 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65919fc8_e5a3_4a1b_9a55_59430b3a8394.slice/crio-acd24b9e39bc6639e86f4bf0e2c49ced473029f709fc8f956d5906d7c3207b0b WatchSource:0}: Error finding container acd24b9e39bc6639e86f4bf0e2c49ced473029f709fc8f956d5906d7c3207b0b: Status 404 returned error can't find the container with id acd24b9e39bc6639e86f4bf0e2c49ced473029f709fc8f956d5906d7c3207b0b Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.949686 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.969124 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.986558 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.986717 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.486683492 +0000 UTC m=+110.444919631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.986995 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:07 crc kubenswrapper[5110]: E0130 00:14:07.987448 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.487438361 +0000 UTC m=+110.445674500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:07 crc kubenswrapper[5110]: I0130 00:14:07.996757 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.009718 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.029802 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.050693 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.087936 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.088070 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.588040651 +0000 UTC m=+110.546276780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.088614 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.089248 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.589212112 +0000 UTC m=+110.547448231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.096506 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.110090 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.130934 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.149696 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.169365 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.189583 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.189733 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.189905 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.689857012 +0000 UTC m=+110.648093141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.190464 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.190864 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.690854108 +0000 UTC m=+110.649090237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.209768 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.229125 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.249520 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.268969 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.290822 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.291549 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" 
(UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.291852 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.791835538 +0000 UTC m=+110.750071667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.309520 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.329618 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.350114 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.369104 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.389296 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.392794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.393143 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.893128795 +0000 UTC m=+110.851364924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.409980 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.430518 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.475297 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.489997 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.492097 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlpqv\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.493844 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.494072 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:08.994057383 +0000 UTC m=+110.952293512 (durationBeforeRetry 500ms). 
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.511403 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.531232 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.550057 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.569907 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.589803 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.594706 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.595071 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.095057283 +0000 UTC m=+111.053293412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.595114 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" event={"ID":"78e2f71e-f453-40aa-adf0-7a47d52731c0","Type":"ContainerStarted","Data":"e28d353a1c92c3e4bd098e229ac738c8179fd05fd2a55e478f17aebc9c86fa46"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.595157 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" event={"ID":"78e2f71e-f453-40aa-adf0-7a47d52731c0","Type":"ContainerStarted","Data":"8c5a6c1f42ea2975973b5d1724d4d851afc25cec6d44bbf0a537670b16da1675"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.597728 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" event={"ID":"65919fc8-e5a3-4a1b-9a55-59430b3a8394","Type":"ContainerStarted","Data":"23b15306fc1c94d0cf2724bcf67908d3b4b7dda26cdee27e55c2badb19459ead"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.597786 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" event={"ID":"65919fc8-e5a3-4a1b-9a55-59430b3a8394","Type":"ContainerStarted","Data":"acd24b9e39bc6639e86f4bf0e2c49ced473029f709fc8f956d5906d7c3207b0b"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.598128 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.599447 5110 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-5p8zc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.599502 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.600163 5110 generic.go:358] "Generic (PLEG): container finished" podID="a2642451-0e0a-4ffb-9356-e7d67106f912" containerID="5f7f33061bf9d944605f7141419b19dc3da4627af8da33c7dc8070df037a8436" exitCode=0
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.600264 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" event={"ID":"a2642451-0e0a-4ffb-9356-e7d67106f912","Type":"ContainerDied","Data":"5f7f33061bf9d944605f7141419b19dc3da4627af8da33c7dc8070df037a8436"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.600286 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" event={"ID":"a2642451-0e0a-4ffb-9356-e7d67106f912","Type":"ContainerStarted","Data":"bc0dd1a8f8acce7a325489f0544148b685415d5425114a3f0b78251160aca298"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.601579 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" event={"ID":"ada32307-a77a-45ee-8310-40d64876b14c","Type":"ContainerStarted","Data":"0c22377c91cb2004f81e3a430d2dee16c6e7725db0c662026808678b23bbf5e5"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.601620 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" event={"ID":"ada32307-a77a-45ee-8310-40d64876b14c","Type":"ContainerStarted","Data":"f73d44da0fc8542a2475328fc9ea1d46bd0ffd960a60f8d0e34f33c3f0cb3499"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.601636 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" event={"ID":"ada32307-a77a-45ee-8310-40d64876b14c","Type":"ContainerStarted","Data":"207d1eb284783858dcafeceda669747d6c5e941102601bb32352bf2e43a9eae2"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.603085 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" event={"ID":"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67","Type":"ContainerStarted","Data":"bd28bc52969fbd17e20f4d3aa641ca142f8e769d7e5885ba5e08ac929f88bd83"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.603111 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" event={"ID":"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67","Type":"ContainerStarted","Data":"7e753ca107495b8df50e94edf887d343aefe842b81d6ad680f3e9e3ee82d1b91"}
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.603364 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.609675 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.629617 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.650889 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.670246 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.690265 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.696480 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.696647 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.196621908 +0000 UTC m=+111.154858037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.696806 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.697164 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.197155152 +0000 UTC m=+111.155391281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.710256 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.730158 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.750267 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.770085 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.790672 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.798053 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.798395 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.298324747 +0000 UTC m=+111.256560906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.798514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.798923 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.298877521 +0000 UTC m=+111.257113670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.810438 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.830598 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.850149 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.870746 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.891289 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.901007 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:08 crc kubenswrapper[5110]: E0130 00:14:08.901356 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.401322719 +0000 UTC m=+111.359558848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.909369 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.931251 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.947323 5110 request.go:752] "Waited before sending request" delay="1.963596188s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0"
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.949944 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.969151 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 30 00:14:08 crc kubenswrapper[5110]: I0130 00:14:08.991168 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.002938 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.003281 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.503266044 +0000 UTC m=+111.461502173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.010874 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.029539 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.050003 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.070813 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.089683 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.104282 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.104486 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.604444628 +0000 UTC m=+111.562680787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.104995 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.105488 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.605464795 +0000 UTC m=+111.563700924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.150840 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjpcc\" (UniqueName: \"kubernetes.io/projected/1f225323-7f5a-46bf-a9a3-1093d025b0b7-kube-api-access-qjpcc\") pod \"downloads-747b44746d-k8w5p\" (UID: \"1f225323-7f5a-46bf-a9a3-1093d025b0b7\") " pod="openshift-console/downloads-747b44746d-k8w5p"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.165453 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glnq\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-kube-api-access-9glnq\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.172490 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6q6\" (UniqueName: \"kubernetes.io/projected/3eb577e7-5470-45ba-bdfe-2b19eeed6a43-kube-api-access-7c6q6\") pod \"console-operator-67c89758df-bvwvj\" (UID: \"3eb577e7-5470-45ba-bdfe-2b19eeed6a43\") " pod="openshift-console-operator/console-operator-67c89758df-bvwvj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.199515 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rq8b\" (UniqueName: \"kubernetes.io/projected/f7a48a07-3ab8-4b38-be60-baa4f39a0757-kube-api-access-6rq8b\") pod \"cluster-samples-operator-6b564684c8-ct5gc\" (UID: \"f7a48a07-3ab8-4b38-be60-baa4f39a0757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.207607 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28llz\" (UniqueName: \"kubernetes.io/projected/ce26ddbd-18c7-48f6-83bf-1124a0467647-kube-api-access-28llz\") pod \"openshift-apiserver-operator-846cbfc458-kp4h8\" (UID: \"ce26ddbd-18c7-48f6-83bf-1124a0467647\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.208206 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.208372 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.708310273 +0000 UTC m=+111.666546432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.208624 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.208957 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.70894681 +0000 UTC m=+111.667182939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.238376 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18461f11-f1b2-43b3-b1c1-9fc3ee55283c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wjggp\" (UID: \"18461f11-f1b2-43b3-b1c1-9fc3ee55283c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.239893 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.249793 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-bvwvj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.254581 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6606de8-e3c8-4a97-ae81-0b526c53fc1c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-lccls\" (UID: \"a6606de8-e3c8-4a97-ae81-0b526c53fc1c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.276320 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpvn\" (UniqueName: \"kubernetes.io/projected/c9c50ad8-bd30-431a-80b1-290290cc1ea8-kube-api-access-kvpvn\") pod \"openshift-config-operator-5777786469-tkt8c\" (UID: \"c9c50ad8-bd30-431a-80b1-290290cc1ea8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.292024 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/959555fc-6a2d-4e6c-bc87-84864eeacb39-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mxp6c\" (UID: \"959555fc-6a2d-4e6c-bc87-84864eeacb39\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.295645 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.298686 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.310233 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.310390 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.810359721 +0000 UTC m=+111.768595980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.310717 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.311226 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.811204793 +0000 UTC m=+111.769440932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.316935 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r89z\" (UniqueName: \"kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z\") pod \"oauth-openshift-66458b6674-vwh7r\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") " pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.329748 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22vm\" (UniqueName: \"kubernetes.io/projected/99ef4716-24ef-4a6b-9a9f-32a139d60aeb-kube-api-access-f22vm\") pod \"dns-operator-799b87ffcd-r2msw\" (UID: \"99ef4716-24ef-4a6b-9a9f-32a139d60aeb\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.331609 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.341860 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.345587 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.350548 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.364291 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.366669 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-k8w5p"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.371901 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.411684 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.411930 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-config\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.411957 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71fe9337-e0aa-4289-93b3-9aea0bdc284b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.411985 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412006 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33877551-c042-4fde-bf15-c4d58e9c3321-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412023 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-client\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412052 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94f97\" (UniqueName: \"kubernetes.io/projected/e30d7bf6-db40-4a33-9847-b27348b08821-kube-api-access-94f97\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412067 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412085 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdcm2\" (UniqueName: \"kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412102 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-config\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412118 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-trusted-ca-bundle\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412136 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdhv\" (UniqueName: \"kubernetes.io/projected/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-kube-api-access-krdhv\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412198 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-service-ca\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412217 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvvp\" (UniqueName: \"kubernetes.io/projected/1fcecaab-3109-4c05-a95a-2e78bf76b2df-kube-api-access-2zvvp\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412240 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412324 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33877551-c042-4fde-bf15-c4d58e9c3321-config\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412418 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-ca\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412446 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412472 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-client\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412521 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65fkk\" (UniqueName: \"kubernetes.io/projected/71fe9337-e0aa-4289-93b3-9aea0bdc284b-kube-api-access-65fkk\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412541 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-image-import-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412564 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lzm\" (UniqueName: \"kubernetes.io/projected/8c0e8717-0f55-4a74-8ac6-086c3267e836-kube-api-access-x8lzm\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412587 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412672 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcecaab-3109-4c05-a95a-2e78bf76b2df-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412691 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-auth-proxy-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412719 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2ct\" (UniqueName: \"kubernetes.io/projected/669ca75b-a358-4dbb-a96c-ca95caffcfa1-kube-api-access-mv2ct\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"
Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.412776 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:09.912749857 +0000 UTC m=+111.870985986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412815 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa3816c0-0f04-4410-a56c-3602c754d5c0-machine-approver-tls\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412833 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-srv-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412871 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412890 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcecaab-3109-4c05-a95a-2e78bf76b2df-config\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412937 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7bp\" (UniqueName: \"kubernetes.io/projected/fa3816c0-0f04-4410-a56c-3602c754d5c0-kube-api-access-xk7bp\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412959 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-oauth-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.412980 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33877551-c042-4fde-bf15-c4d58e9c3321-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413002 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30d7bf6-db40-4a33-9847-b27348b08821-serving-cert\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413024 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a311df-cb31-4526-a1fc-3a58634d5dff-config\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413065 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-serving-cert\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413091 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-oauth-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413110 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413132 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413151 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413168 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7bd\" (UniqueName: \"kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413209 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzqj7\" (UniqueName: \"kubernetes.io/projected/33877551-c042-4fde-bf15-c4d58e9c3321-kube-api-access-qzqj7\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413396 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71fe9337-e0aa-4289-93b3-9aea0bdc284b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqxx\" (UniqueName: \"kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413487 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a311df-cb31-4526-a1fc-3a58634d5dff-serving-cert\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413544 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413564 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwm5\" (UniqueName: \"kubernetes.io/projected/b1823c5b-86dc-4bbf-8964-bc19dba82794-kube-api-access-spwm5\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413588 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413607 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-serving-cert\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413623 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8c0e8717-0f55-4a74-8ac6-086c3267e836-tmp-dir\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413659 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413753 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit-dir\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413794 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/669ca75b-a358-4dbb-a96c-ca95caffcfa1-tmpfs\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"
Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413833 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName:
\"kubernetes.io/projected/a4a311df-cb31-4526-a1fc-3a58634d5dff-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413854 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413883 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.413898 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-encryption-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.414037 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.414080 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.414106 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.414125 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.414142 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a311df-cb31-4526-a1fc-3a58634d5dff-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: 
\"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515204 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515265 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-socket-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515288 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-images\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515309 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-node-bootstrap-token\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515386 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0588f21c-6055-4426-a75e-6e581b2f8b59-tmpfs\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515421 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515457 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515479 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzclr\" (UniqueName: \"kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515506 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515544 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515602 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a311df-cb31-4526-a1fc-3a58634d5dff-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.515624 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjgx\" (UniqueName: \"kubernetes.io/projected/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-kube-api-access-lqjgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516528 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-metrics-certs\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516574 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed6c878-a138-4828-adc1-2dea6827fc2b-serving-cert\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516633 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-config\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516660 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71fe9337-e0aa-4289-93b3-9aea0bdc284b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516694 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.516974 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517313 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/813993d2-50e6-4b33-9fc1-d354e519945a-signing-cabundle\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517542 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517587 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc982\" (UniqueName: \"kubernetes.io/projected/eed6c878-a138-4828-adc1-2dea6827fc2b-kube-api-access-mc982\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517648 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33877551-c042-4fde-bf15-c4d58e9c3321-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517667 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-client\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517687 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xfg\" (UniqueName: \"kubernetes.io/projected/86c4232c-55ee-4511-a00e-eea5740d1a68-kube-api-access-46xfg\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517715 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-94f97\" (UniqueName: \"kubernetes.io/projected/e30d7bf6-db40-4a33-9847-b27348b08821-kube-api-access-94f97\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517734 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517753 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkcc\" (UniqueName: \"kubernetes.io/projected/2a59f2db-3db1-423c-9b1e-287aded6f8c7-kube-api-access-7nkcc\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517771 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71abf881-27f0-4048-8f11-5585b96cf594-service-ca-bundle\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517797 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sdcm2\" (UniqueName: \"kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517818 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-config\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517838 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b110753-9089-443c-afc4-462284914075-tmp-dir\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517874 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-trusted-ca-bundle\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517892 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krdhv\" (UniqueName: \"kubernetes.io/projected/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-kube-api-access-krdhv\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517912 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86c4232c-55ee-4511-a00e-eea5740d1a68-tmpfs\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517932 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-certs\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517925 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a311df-cb31-4526-a1fc-3a58634d5dff-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.517967 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.518324 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.018307227 +0000 UTC m=+111.976543356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.518321 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.518533 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-config\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.519568 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-trusted-ca-bundle\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.520079 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.520726 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521013 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-service-ca\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521049 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/813993d2-50e6-4b33-9fc1-d354e519945a-signing-key\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521088 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvvp\" (UniqueName: \"kubernetes.io/projected/1fcecaab-3109-4c05-a95a-2e78bf76b2df-kube-api-access-2zvvp\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521112 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521133 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33877551-c042-4fde-bf15-c4d58e9c3321-config\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521153 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwhg\" (UniqueName: \"kubernetes.io/projected/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-kube-api-access-hbwhg\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521177 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-profile-collector-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.521878 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-config\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc 
kubenswrapper[5110]: I0130 00:14:09.522183 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33877551-c042-4fde-bf15-c4d58e9c3321-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522315 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522356 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-service-ca\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522400 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-ca\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522424 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6cb5412-1e3d-4632-96a0-48afa9db27bb-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522480 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522505 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-client\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.522526 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b110753-9089-443c-afc4-462284914075-config-volume\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523239 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-ca\") pod 
\"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523295 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30d7bf6-db40-4a33-9847-b27348b08821-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523357 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-65fkk\" (UniqueName: \"kubernetes.io/projected/71fe9337-e0aa-4289-93b3-9aea0bdc284b-kube-api-access-65fkk\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523427 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-image-import-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523478 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-webhook-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.523513 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lzm\" (UniqueName: \"kubernetes.io/projected/8c0e8717-0f55-4a74-8ac6-086c3267e836-kube-api-access-x8lzm\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.524736 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-image-import-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.524790 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.524839 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71fe9337-e0aa-4289-93b3-9aea0bdc284b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc 
kubenswrapper[5110]: I0130 00:14:09.526117 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33877551-c042-4fde-bf15-c4d58e9c3321-config\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.526204 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcecaab-3109-4c05-a95a-2e78bf76b2df-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.526618 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-auth-proxy-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.526663 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-plugins-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.526685 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2ct\" (UniqueName: \"kubernetes.io/projected/669ca75b-a358-4dbb-a96c-ca95caffcfa1-kube-api-access-mv2ct\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.527106 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa3816c0-0f04-4410-a56c-3602c754d5c0-machine-approver-tls\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.527150 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-srv-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.527253 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa3816c0-0f04-4410-a56c-3602c754d5c0-auth-proxy-config\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.532644 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-client\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537264 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537327 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcecaab-3109-4c05-a95a-2e78bf76b2df-config\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537453 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7bp\" (UniqueName: \"kubernetes.io/projected/fa3816c0-0f04-4410-a56c-3602c754d5c0-kube-api-access-xk7bp\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537497 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kqj\" (UniqueName: \"kubernetes.io/projected/c6cb5412-1e3d-4632-96a0-48afa9db27bb-kube-api-access-t2kqj\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537529 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-oauth-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537556 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537584 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-mountpoint-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/33877551-c042-4fde-bf15-c4d58e9c3321-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537640 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjkbc\" (UniqueName: \"kubernetes.io/projected/7194dc7a-97fb-44de-b577-37143c6365e8-kube-api-access-xjkbc\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537672 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30d7bf6-db40-4a33-9847-b27348b08821-serving-cert\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537713 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a311df-cb31-4526-a1fc-3a58634d5dff-config\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537766 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537810 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-serving-cert\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537830 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkfp\" (UniqueName: \"kubernetes.io/projected/a335e035-e28e-4f63-8c7e-97b0059d0b13-kube-api-access-tmkfp\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537861 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-oauth-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537883 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: 
\"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537908 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-apiservice-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537934 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537958 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.537981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7bd\" (UniqueName: \"kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538036 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzqj7\" (UniqueName: \"kubernetes.io/projected/33877551-c042-4fde-bf15-c4d58e9c3321-kube-api-access-qzqj7\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538058 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71fe9337-e0aa-4289-93b3-9aea0bdc284b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538083 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqxx\" (UniqueName: \"kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538109 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-registration-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " 
pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538132 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335e035-e28e-4f63-8c7e-97b0059d0b13-cert\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538177 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a59f2db-3db1-423c-9b1e-287aded6f8c7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538234 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a311df-cb31-4526-a1fc-3a58634d5dff-serving-cert\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538267 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkrn\" (UniqueName: \"kubernetes.io/projected/71abf881-27f0-4048-8f11-5585b96cf594-kube-api-access-trkrn\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538290 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-csi-data-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538318 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hsfb\" (UniqueName: \"kubernetes.io/projected/8062691e-72b4-422c-9215-b86e22d137c1-kube-api-access-5hsfb\") pod \"migrator-866fcbc849-bhqzk\" (UID: \"8062691e-72b4-422c-9215-b86e22d137c1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538351 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlcx\" (UniqueName: \"kubernetes.io/projected/813993d2-50e6-4b33-9fc1-d354e519945a-kube-api-access-xmlcx\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538377 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-srv-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 
00:14:09.538401 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b110753-9089-443c-afc4-462284914075-metrics-tls\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538470 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538495 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spwm5\" (UniqueName: \"kubernetes.io/projected/b1823c5b-86dc-4bbf-8964-bc19dba82794-kube-api-access-spwm5\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538518 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7194dc7a-97fb-44de-b577-37143c6365e8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538544 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538570 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-serving-cert\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538588 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8c0e8717-0f55-4a74-8ac6-086c3267e836-tmp-dir\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538611 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed6c878-a138-4828-adc1-2dea6827fc2b-config\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538636 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n642f\" (UniqueName: \"kubernetes.io/projected/6b110753-9089-443c-afc4-462284914075-kube-api-access-n642f\") pod \"dns-default-tm4w8\" 
(UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538679 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538699 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-default-certificate\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.538874 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-srv-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.539855 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.541054 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcecaab-3109-4c05-a95a-2e78bf76b2df-config\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.541063 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.541685 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.542457 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a311df-cb31-4526-a1fc-3a58634d5dff-config\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.527610 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c0e8717-0f55-4a74-8ac6-086c3267e836-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.546256 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-client\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.547229 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.548527 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit-dir\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.548602 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/669ca75b-a358-4dbb-a96c-ca95caffcfa1-tmpfs\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.549619 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.549749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4a311df-cb31-4526-a1fc-3a58634d5dff-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.551133 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.551441 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8c0e8717-0f55-4a74-8ac6-086c3267e836-tmp-dir\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553574 5110 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553654 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553683 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-encryption-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553702 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1823c5b-86dc-4bbf-8964-bc19dba82794-oauth-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553771 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvt85\" (UniqueName: \"kubernetes.io/projected/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-kube-api-access-nvt85\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553817 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6wws\" (UniqueName: \"kubernetes.io/projected/0588f21c-6055-4426-a75e-6e581b2f8b59-kube-api-access-f6wws\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.553866 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-stats-auth\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.554112 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-audit-dir\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.554488 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8"] Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.554970 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.559852 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.560059 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.563909 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-serving-cert\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.563908 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa3816c0-0f04-4410-a56c-3602c754d5c0-machine-approver-tls\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.564728 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30d7bf6-db40-4a33-9847-b27348b08821-serving-cert\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.567224 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcecaab-3109-4c05-a95a-2e78bf76b2df-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.570122 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.570892 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/669ca75b-a358-4dbb-a96c-ca95caffcfa1-tmpfs\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.571267 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/669ca75b-a358-4dbb-a96c-ca95caffcfa1-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.574756 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33877551-c042-4fde-bf15-c4d58e9c3321-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.575751 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.579987 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71fe9337-e0aa-4289-93b3-9aea0bdc284b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.579978 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a311df-cb31-4526-a1fc-3a58634d5dff-serving-cert\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.580494 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-94f97\" (UniqueName: \"kubernetes.io/projected/e30d7bf6-db40-4a33-9847-b27348b08821-kube-api-access-94f97\") pod \"authentication-operator-7f5c659b84-sd7xt\" (UID: \"e30d7bf6-db40-4a33-9847-b27348b08821\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.581100 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-encryption-config\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.581519 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-serving-cert\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.582448 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0e8717-0f55-4a74-8ac6-086c3267e836-serving-cert\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.584752 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.589008 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1823c5b-86dc-4bbf-8964-bc19dba82794-console-oauth-config\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.597073 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-bvwvj"] Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.601989 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdcm2\" (UniqueName: \"kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2\") pod \"image-pruner-29495520-r6lp4\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.602009 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdhv\" (UniqueName: \"kubernetes.io/projected/0f4aee94-d32d-43e7-93b1-40c3a05ed8ef-kube-api-access-krdhv\") pod \"apiserver-9ddfb9f55-4qlhj\" (UID: \"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.603753 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.609149 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" event={"ID":"a2642451-0e0a-4ffb-9356-e7d67106f912","Type":"ContainerStarted","Data":"c506b70a5a67a135751be492f6a1845614a8d4ed2b069261c165eac6f2b39bb2"} Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.628603 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvvp\" (UniqueName: \"kubernetes.io/projected/1fcecaab-3109-4c05-a95a-2e78bf76b2df-kube-api-access-2zvvp\") pod \"kube-storage-version-migrator-operator-565b79b866-tkjk4\" (UID: \"1fcecaab-3109-4c05-a95a-2e78bf76b2df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: W0130 00:14:09.638430 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce26ddbd_18c7_48f6_83bf_1124a0467647.slice/crio-095a364344cbf9bf37b153a0520c50f8f3499c2e19614db55b3b2be29ac20b9e WatchSource:0}: Error finding container 095a364344cbf9bf37b153a0520c50f8f3499c2e19614db55b3b2be29ac20b9e: Status 404 returned error can't find the container with id 095a364344cbf9bf37b153a0520c50f8f3499c2e19614db55b3b2be29ac20b9e Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.648318 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-65fkk\" (UniqueName: \"kubernetes.io/projected/71fe9337-e0aa-4289-93b3-9aea0bdc284b-kube-api-access-65fkk\") pod \"machine-config-controller-f9cdd68f7-s8j52\" (UID: \"71fe9337-e0aa-4289-93b3-9aea0bdc284b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.654796 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655038 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nvt85\" (UniqueName: \"kubernetes.io/projected/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-kube-api-access-nvt85\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655073 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6wws\" (UniqueName: \"kubernetes.io/projected/0588f21c-6055-4426-a75e-6e581b2f8b59-kube-api-access-f6wws\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655100 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-stats-auth\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc 
kubenswrapper[5110]: I0130 00:14:09.655121 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-socket-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655265 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-images\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-node-bootstrap-token\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655303 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0588f21c-6055-4426-a75e-6e581b2f8b59-tmpfs\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655323 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655484 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzclr\" (UniqueName: \"kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655506 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqjgx\" (UniqueName: \"kubernetes.io/projected/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-kube-api-access-lqjgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655529 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-metrics-certs\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655548 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed6c878-a138-4828-adc1-2dea6827fc2b-serving-cert\") pod \"service-ca-operator-5b9c976747-5fsr2\" 
(UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655578 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/813993d2-50e6-4b33-9fc1-d354e519945a-signing-cabundle\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655597 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mc982\" (UniqueName: \"kubernetes.io/projected/eed6c878-a138-4828-adc1-2dea6827fc2b-kube-api-access-mc982\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655618 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-46xfg\" (UniqueName: \"kubernetes.io/projected/86c4232c-55ee-4511-a00e-eea5740d1a68-kube-api-access-46xfg\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655639 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkcc\" (UniqueName: \"kubernetes.io/projected/2a59f2db-3db1-423c-9b1e-287aded6f8c7-kube-api-access-7nkcc\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655656 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71abf881-27f0-4048-8f11-5585b96cf594-service-ca-bundle\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655674 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b110753-9089-443c-afc4-462284914075-tmp-dir\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655694 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86c4232c-55ee-4511-a00e-eea5740d1a68-tmpfs\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655710 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-certs\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655735 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/813993d2-50e6-4b33-9fc1-d354e519945a-signing-key\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655753 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbwhg\" (UniqueName: \"kubernetes.io/projected/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-kube-api-access-hbwhg\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655771 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-profile-collector-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655833 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6cb5412-1e3d-4632-96a0-48afa9db27bb-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b110753-9089-443c-afc4-462284914075-config-volume\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655871 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-webhook-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655895 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-plugins-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655929 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2kqj\" (UniqueName: \"kubernetes.io/projected/c6cb5412-1e3d-4632-96a0-48afa9db27bb-kube-api-access-t2kqj\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655955 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655972 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-mountpoint-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.655992 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjkbc\" (UniqueName: \"kubernetes.io/projected/7194dc7a-97fb-44de-b577-37143c6365e8-kube-api-access-xjkbc\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656035 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkfp\" (UniqueName: \"kubernetes.io/projected/a335e035-e28e-4f63-8c7e-97b0059d0b13-kube-api-access-tmkfp\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656053 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-apiservice-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656083 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-registration-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656098 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335e035-e28e-4f63-8c7e-97b0059d0b13-cert\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656122 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a59f2db-3db1-423c-9b1e-287aded6f8c7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656148 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trkrn\" (UniqueName: \"kubernetes.io/projected/71abf881-27f0-4048-8f11-5585b96cf594-kube-api-access-trkrn\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656162 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-csi-data-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656179 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hsfb\" (UniqueName: \"kubernetes.io/projected/8062691e-72b4-422c-9215-b86e22d137c1-kube-api-access-5hsfb\") pod \"migrator-866fcbc849-bhqzk\" (UID: \"8062691e-72b4-422c-9215-b86e22d137c1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656196 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmlcx\" (UniqueName: \"kubernetes.io/projected/813993d2-50e6-4b33-9fc1-d354e519945a-kube-api-access-xmlcx\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656227 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-srv-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656242 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b110753-9089-443c-afc4-462284914075-metrics-tls\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656267 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7194dc7a-97fb-44de-b577-37143c6365e8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656313 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed6c878-a138-4828-adc1-2dea6827fc2b-config\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656344 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n642f\" (UniqueName: \"kubernetes.io/projected/6b110753-9089-443c-afc4-462284914075-kube-api-access-n642f\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656367 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-default-certificate\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656382 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.656530 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.656616 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.156596435 +0000 UTC m=+112.114832564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.658070 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b110753-9089-443c-afc4-462284914075-config-volume\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.659350 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-socket-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.659686 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-mountpoint-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.659984 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-images\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.660435 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71abf881-27f0-4048-8f11-5585b96cf594-service-ca-bundle\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.660710 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b110753-9089-443c-afc4-462284914075-tmp-dir\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.661022 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86c4232c-55ee-4511-a00e-eea5740d1a68-tmpfs\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.663787 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc 
kubenswrapper[5110]: I0130 00:14:09.666146 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335e035-e28e-4f63-8c7e-97b0059d0b13-cert\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.666199 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-plugins-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.666763 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0588f21c-6055-4426-a75e-6e581b2f8b59-tmpfs\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.667001 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.669893 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-stats-auth\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.670962 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed6c878-a138-4828-adc1-2dea6827fc2b-config\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.672056 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-registration-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.672770 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6cb5412-1e3d-4632-96a0-48afa9db27bb-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.673795 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-csi-data-dir\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.675147 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/813993d2-50e6-4b33-9fc1-d354e519945a-signing-cabundle\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.676869 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-apiservice-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.678870 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-certs\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.678905 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-profile-collector-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.680754 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.681726 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0588f21c-6055-4426-a75e-6e581b2f8b59-webhook-cert\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.688784 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed6c878-a138-4828-adc1-2dea6827fc2b-serving-cert\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.695021 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86c4232c-55ee-4511-a00e-eea5740d1a68-srv-cert\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.695144 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b110753-9089-443c-afc4-462284914075-metrics-tls\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " 
pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.695178 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lzm\" (UniqueName: \"kubernetes.io/projected/8c0e8717-0f55-4a74-8ac6-086c3267e836-kube-api-access-x8lzm\") pod \"etcd-operator-69b85846b6-j5gfn\" (UID: \"8c0e8717-0f55-4a74-8ac6-086c3267e836\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.695405 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a59f2db-3db1-423c-9b1e-287aded6f8c7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.696098 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-default-certificate\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.696734 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7194dc7a-97fb-44de-b577-37143c6365e8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.696932 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71abf881-27f0-4048-8f11-5585b96cf594-metrics-certs\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.698471 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-node-bootstrap-token\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.698947 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/813993d2-50e6-4b33-9fc1-d354e519945a-signing-key\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.699398 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2ct\" (UniqueName: \"kubernetes.io/projected/669ca75b-a358-4dbb-a96c-ca95caffcfa1-kube-api-access-mv2ct\") pod \"catalog-operator-75ff9f647d-g5znk\" (UID: \"669ca75b-a358-4dbb-a96c-ca95caffcfa1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.700041 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c6cb5412-1e3d-4632-96a0-48afa9db27bb-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.709098 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.721422 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.728713 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.731488 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.733933 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqxx\" (UniqueName: \"kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx\") pod \"marketplace-operator-547dbd544d-kxkkt\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.737604 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.747934 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7bp\" (UniqueName: \"kubernetes.io/projected/fa3816c0-0f04-4410-a56c-3602c754d5c0-kube-api-access-xk7bp\") pod \"machine-approver-54c688565-6bhnt\" (UID: \"fa3816c0-0f04-4410-a56c-3602c754d5c0\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.748032 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzqj7\" (UniqueName: \"kubernetes.io/projected/33877551-c042-4fde-bf15-c4d58e9c3321-kube-api-access-qzqj7\") pod \"openshift-controller-manager-operator-686468bdd5-gd6hl\" (UID: \"33877551-c042-4fde-bf15-c4d58e9c3321\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.758013 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.761425 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.261387664 +0000 UTC m=+112.219623793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.773744 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7bd\" (UniqueName: \"kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd\") pod \"collect-profiles-29495520-jjx65\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.783557 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.798297 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwm5\" (UniqueName: \"kubernetes.io/projected/b1823c5b-86dc-4bbf-8964-bc19dba82794-kube-api-access-spwm5\") pod \"console-64d44f6ddf-q9fd8\" (UID: \"b1823c5b-86dc-4bbf-8964-bc19dba82794\") " pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.821850 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4a311df-cb31-4526-a1fc-3a58634d5dff-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tnkh5\" (UID: \"a4a311df-cb31-4526-a1fc-3a58634d5dff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.823667 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-k8w5p"] Jan 30 00:14:09 crc kubenswrapper[5110]: W0130 00:14:09.849643 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f225323_7f5a_46bf_a9a3_1093d025b0b7.slice/crio-1e7a56c0c8b37e3255747ec656af3c9bc9c580c80d881214d1839350ae397790 WatchSource:0}: Error finding container 1e7a56c0c8b37e3255747ec656af3c9bc9c580c80d881214d1839350ae397790: Status 404 returned error can't find the container with id 1e7a56c0c8b37e3255747ec656af3c9bc9c580c80d881214d1839350ae397790 Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.854159 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-tkt8c"] Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.855355 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvt85\" (UniqueName: \"kubernetes.io/projected/e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8-kube-api-access-nvt85\") pod \"machine-config-server-9q6kw\" (UID: \"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8\") " pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.859125 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.859731 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.359712324 +0000 UTC m=+112.317948443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.867653 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.876974 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9q6kw" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.878286 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6wws\" (UniqueName: \"kubernetes.io/projected/0588f21c-6055-4426-a75e-6e581b2f8b59-kube-api-access-f6wws\") pod \"packageserver-7d4fc7d867-ln5fr\" (UID: \"0588f21c-6055-4426-a75e-6e581b2f8b59\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.885779 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c"] Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.889605 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.906819 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xfg\" (UniqueName: \"kubernetes.io/projected/86c4232c-55ee-4511-a00e-eea5740d1a68-kube-api-access-46xfg\") pod \"olm-operator-5cdf44d969-qpjvn\" (UID: \"86c4232c-55ee-4511-a00e-eea5740d1a68\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.912565 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.913202 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nkcc\" (UniqueName: \"kubernetes.io/projected/2a59f2db-3db1-423c-9b1e-287aded6f8c7-kube-api-access-7nkcc\") pod \"package-server-manager-77f986bd66-52wpg\" (UID: \"2a59f2db-3db1-423c-9b1e-287aded6f8c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.918108 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.925192 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.933360 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjkbc\" (UniqueName: \"kubernetes.io/projected/7194dc7a-97fb-44de-b577-37143c6365e8-kube-api-access-xjkbc\") pod \"multus-admission-controller-69db94689b-dvkqm\" (UID: \"7194dc7a-97fb-44de-b577-37143c6365e8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.943380 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc"] Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.946311 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmkfp\" (UniqueName: \"kubernetes.io/projected/a335e035-e28e-4f63-8c7e-97b0059d0b13-kube-api-access-tmkfp\") pod \"ingress-canary-vvz9f\" (UID: \"a335e035-e28e-4f63-8c7e-97b0059d0b13\") " pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.952661 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.961615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:09 crc kubenswrapper[5110]: E0130 00:14:09.962262 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.462247004 +0000 UTC m=+112.420483133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.968618 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2kqj\" (UniqueName: \"kubernetes.io/projected/c6cb5412-1e3d-4632-96a0-48afa9db27bb-kube-api-access-t2kqj\") pod \"machine-config-operator-67c9d58cbb-lf7lb\" (UID: \"c6cb5412-1e3d-4632-96a0-48afa9db27bb\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:09 crc kubenswrapper[5110]: I0130 00:14:09.985528 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.012325 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqjgx\" (UniqueName: \"kubernetes.io/projected/327eaa18-356c-4a5b-a6e2-a6cea319d8cb-kube-api-access-lqjgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tpzqq\" (UID: \"327eaa18-356c-4a5b-a6e2-a6cea319d8cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.023152 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.024587 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzclr\" (UniqueName: \"kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr\") pod \"cni-sysctl-allowlist-ds-zfs4d\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.033629 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trkrn\" (UniqueName: \"kubernetes.io/projected/71abf881-27f0-4048-8f11-5585b96cf594-kube-api-access-trkrn\") pod \"router-default-68cf44c8b8-q4bkd\" (UID: \"71abf881-27f0-4048-8f11-5585b96cf594\") " pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.043584 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.048897 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n642f\" (UniqueName: \"kubernetes.io/projected/6b110753-9089-443c-afc4-462284914075-kube-api-access-n642f\") pod \"dns-default-tm4w8\" (UID: \"6b110753-9089-443c-afc4-462284914075\") " pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.053898 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.060611 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.062917 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.063567 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.563542342 +0000 UTC m=+112.521778471 (durationBeforeRetry 500ms). 
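Each failed attempt is parked by nestedpendingoperations with a delay before the volume manager may retry: the timestamp in each entry is the earliest permitted retry, the m=+112.x values are seconds since kubelet start, and every attempt in this window uses the initial 500ms durationBeforeRetry (the error detail for the attempt above continues just below). The kubelet escalates this delay for an operation that keeps failing under the same key; a generic sketch of such a policy, where the doubling factor and the two-minute cap are illustrative assumptions rather than values taken from these logs:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay mimics an exponential retry policy: the 500ms initial value
    // matches durationBeforeRetry in these logs; the doubling and the cap
    // are assumptions for illustration.
    func nextDelay(consecutiveFailures int) time.Duration {
        const initial = 500 * time.Millisecond
        const maxDelay = 2 * time.Minute
        d := initial << consecutiveFailures // 500ms, 1s, 2s, 4s, ...
        if d <= 0 || d > maxDelay {
            return maxDelay
        }
        return d
    }

    func main() {
        for n := 0; n < 6; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, nextDelay(n))
        }
    }

Run as-is it prints the shape of the schedule (500ms, 1s, 2s, ...), not a claim about this node's exact cap.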
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.066595 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.076979 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbwhg\" (UniqueName: \"kubernetes.io/projected/39298aae-aa93-40ad-8dfc-9d5fdea9ae10-kube-api-access-hbwhg\") pod \"csi-hostpathplugin-2dmcg\" (UID: \"39298aae-aa93-40ad-8dfc-9d5fdea9ae10\") " pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.089019 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.091227 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.101920 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmlcx\" (UniqueName: \"kubernetes.io/projected/813993d2-50e6-4b33-9fc1-d354e519945a-kube-api-access-xmlcx\") pod \"service-ca-74545575db-t2qff\" (UID: \"813993d2-50e6-4b33-9fc1-d354e519945a\") " pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.105781 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.115766 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hsfb\" (UniqueName: \"kubernetes.io/projected/8062691e-72b4-422c-9215-b86e22d137c1-kube-api-access-5hsfb\") pod \"migrator-866fcbc849-bhqzk\" (UID: \"8062691e-72b4-422c-9215-b86e22d137c1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.122732 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-t2qff" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.129246 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vvz9f" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.131660 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc982\" (UniqueName: \"kubernetes.io/projected/eed6c878-a138-4828-adc1-2dea6827fc2b-kube-api-access-mc982\") pod \"service-ca-operator-5b9c976747-5fsr2\" (UID: \"eed6c878-a138-4828-adc1-2dea6827fc2b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.136801 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.151209 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.157890 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.167462 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.168780 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.169155 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.669138782 +0000 UTC m=+112.627374911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.185613 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.270008 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.271520 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
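The recurring "No sandbox for pod can be found. Need to start a new one" lines are expected on a fresh kubelet start: before creating containers, the kubelet asks the CRI runtime (CRI-O on this host, per the crio- prefixed cgroup names in the watch-event warnings) whether a pod sandbox already exists, and when none survives it provisions one; the bare ContainerStarted "Data" hashes that appear later without a container name correspond to those new sandboxes. Sandbox state can be queried straight from the runtime socket. A sketch against the CRI v1 API, where the socket path is the conventional CRI-O default and therefore an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's usual socket path; adjust for containerd or a custom layout.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // The same kind of query the kubelet makes before deciding it
        // needs to start a new sandbox for a pod.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, sb := range resp.Items {
            fmt.Printf("%s/%s state=%s\n", sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }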
No retries permitted until 2026-01-30 00:14:10.771494008 +0000 UTC m=+112.729730137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.271998 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.272548 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.772533535 +0000 UTC m=+112.730769664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: W0130 00:14:10.276105 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18461f11_f1b2_43b3_b1c1_9fc3ee55283c.slice/crio-99c32405864cdc8f52dc4010bdfe7dc0f9c987a72f7c28b2d131ee1bb55f0104 WatchSource:0}: Error finding container 99c32405864cdc8f52dc4010bdfe7dc0f9c987a72f7c28b2d131ee1bb55f0104: Status 404 returned error can't find the container with id 99c32405864cdc8f52dc4010bdfe7dc0f9c987a72f7c28b2d131ee1bb55f0104 Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.373288 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.374040 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.874021268 +0000 UTC m=+112.832257397 (durationBeforeRetry 500ms). 
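The manager.go "Failed to process watch event ... can't find the container with id ..." warnings are a benign race: cAdvisor inside the kubelet learns about new cgroup directories (the crio-<hash> slices named in the event) from filesystem notifications and immediately queries the runtime, which may not yet answer for a container it is still creating, hence the Status 404. A sketch of that watch pattern using fsnotify, with the cgroup path taken from the warning text and the library choice an assumption for illustration:

    package main

    import (
        "fmt"
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // The burstable QoS tier seen in the warning's cgroup path.
        if err := w.Add("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&fsnotify.Create != 0 {
                // At this instant the runtime may not yet answer queries for
                // the new ID - the race behind the Status 404 warnings above.
                fmt.Println("new cgroup appeared:", ev.Name)
            }
        }
    }

The resolution is simply to pick the container up on a later event, which is consistent with the same pods appearing normally in PLEG events moments later in this log.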
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.378720 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.398280 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.405881 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-r2msw"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.406698 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.413775 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.419881 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" podStartSLOduration=90.419865441 podStartE2EDuration="1m30.419865441s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:10.419386308 +0000 UTC m=+112.377622437" watchObservedRunningTime="2026-01-30 00:14:10.419865441 +0000 UTC m=+112.378101570" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.428553 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-r6lp4"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.475394 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.475713 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:10.975698815 +0000 UTC m=+112.933934944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.511227 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-7vndv" podStartSLOduration=90.511204227 podStartE2EDuration="1m30.511204227s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:10.507070809 +0000 UTC m=+112.465306948" watchObservedRunningTime="2026-01-30 00:14:10.511204227 +0000 UTC m=+112.469440366" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.523541 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.579035 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.579202 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.07917038 +0000 UTC m=+113.037406499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.579777 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.580232 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.080224648 +0000 UTC m=+113.038460777 (durationBeforeRetry 500ms). 
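The pod_startup_latency_tracker lines record startup SLO observations: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and the all-zero firstStartedPulling/lastFinishedPulling values mean no image-pull time was attributed (the images were already on disk). Both pods here were created at 00:12:40 and observed running just after 00:14:10, roughly 90 seconds later, because the node itself was still coming up in between. The arithmetic, checked with nothing but the standard library:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-30 00:12:40 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-30 00:14:10.419386308 +0000 UTC")
        // Prints 1m30.419386308s; the logged podStartSLOduration of
        // 90.419865441s is read a fraction of a millisecond later by the
        // tracker's own clock.
        fmt.Println(running.Sub(created))
    }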
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: W0130 00:14:10.583289 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e56c2e9_b76d_4ffe_9af6_dd6850d11a40.slice/crio-3429d380c5214e63b1bcfd9eae9503c130f864cd767155e45f883f2e6c5b55c7 WatchSource:0}: Error finding container 3429d380c5214e63b1bcfd9eae9503c130f864cd767155e45f883f2e6c5b55c7: Status 404 returned error can't find the container with id 3429d380c5214e63b1bcfd9eae9503c130f864cd767155e45f883f2e6c5b55c7 Jan 30 00:14:10 crc kubenswrapper[5110]: W0130 00:14:10.607400 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda070c3b8_7e87_4386_98d0_7ed3aaa53772.slice/crio-a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24 WatchSource:0}: Error finding container a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24: Status 404 returned error can't find the container with id a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24 Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.680677 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.681091 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.682828 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" event={"ID":"ce26ddbd-18c7-48f6-83bf-1124a0467647","Type":"ContainerStarted","Data":"6070616152b796f4696f83d2e09e241851e21157a49031ecc31a9738ab894fda"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.682906 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" event={"ID":"ce26ddbd-18c7-48f6-83bf-1124a0467647","Type":"ContainerStarted","Data":"095a364344cbf9bf37b153a0520c50f8f3499c2e19614db55b3b2be29ac20b9e"} Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.684090 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.184058471 +0000 UTC m=+113.142294750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.700269 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.705501 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" event={"ID":"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40","Type":"ContainerStarted","Data":"3429d380c5214e63b1bcfd9eae9503c130f864cd767155e45f883f2e6c5b55c7"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.731236 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9q6kw" event={"ID":"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8","Type":"ContainerStarted","Data":"34c2af440f41b064946d943be9796f6ac0ce311ede25cf4d362ac0c76059b6f1"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.731293 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9q6kw" event={"ID":"e67fb2b2-7eb7-4595-b3e6-02a7f70f59a8","Type":"ContainerStarted","Data":"09a717b21398fca4103f663c6c5e306ba0747d21d694a66d3ac218f100178ea3"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.735411 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerStarted","Data":"defcb9ae9bfda8979a7f29ddcb791c1f646ff5dadbb5434291bac726c774d343"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.746020 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" event={"ID":"1fcecaab-3109-4c05-a95a-2e78bf76b2df","Type":"ContainerStarted","Data":"f14ac8bf3973aa69cf0247ec581f5160b62a32b8922b15368c81f0050ff29882"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.754840 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" event={"ID":"f7a48a07-3ab8-4b38-be60-baa4f39a0757","Type":"ContainerStarted","Data":"13c51af577ea67184c2ac0779acd038b0a2c79e4cb8f0aa6415ef67a66e23699"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.764595 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" event={"ID":"71abf881-27f0-4048-8f11-5585b96cf594","Type":"ContainerStarted","Data":"16ad3e243b329c0b0eec1fed980ea77004ec83474b78d0fc32f435d466971838"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.774115 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" 
event={"ID":"3eb577e7-5470-45ba-bdfe-2b19eeed6a43","Type":"ContainerStarted","Data":"35d0044364ca2fe8c7180e1a4ae44e84aa9f1851bec93c2406b7a3e6423187f4"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.774159 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" event={"ID":"3eb577e7-5470-45ba-bdfe-2b19eeed6a43","Type":"ContainerStarted","Data":"e0c4110590de255e00b544c59bc5c81c8a40753bf040601e66f6de129cfaf266"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.776775 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.780003 5110 patch_prober.go:28] interesting pod/console-operator-67c89758df-bvwvj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.780074 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" podUID="3eb577e7-5470-45ba-bdfe-2b19eeed6a43" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.785860 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-k8w5p" event={"ID":"1f225323-7f5a-46bf-a9a3-1093d025b0b7","Type":"ContainerStarted","Data":"ea239b09931802f29d3c4d52f471fd6d924f7363322bffea66055d0579a6cc75"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.785967 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-k8w5p" event={"ID":"1f225323-7f5a-46bf-a9a3-1093d025b0b7","Type":"ContainerStarted","Data":"1e7a56c0c8b37e3255747ec656af3c9bc9c580c80d881214d1839350ae397790"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.787188 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.787239 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.787265 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.787369 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.787404 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.788519 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.288498471 +0000 UTC m=+113.246734600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.789048 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.793628 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-k8w5p" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.799224 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" event={"ID":"959555fc-6a2d-4e6c-bc87-84864eeacb39","Type":"ContainerStarted","Data":"20b1b3389537a5506779694c77597f73bc4ab174e3b0a26dccea1ee05e227d77"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.799264 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" event={"ID":"959555fc-6a2d-4e6c-bc87-84864eeacb39","Type":"ContainerStarted","Data":"67c95e0a841aa9cdcc1a98a4f35090a824f0763d5b58f00cc4f3d6263b67a403"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.802180 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-k8w5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.802234 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-k8w5p" podUID="1f225323-7f5a-46bf-a9a3-1093d025b0b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 
00:14:10.803947 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" event={"ID":"669ca75b-a358-4dbb-a96c-ca95caffcfa1","Type":"ContainerStarted","Data":"3facb3bd687a8598a4b97aa687cab3f608184c7e75af2379510eccebdc2caca3"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.806682 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.807160 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbd252e-c54f-4a19-b637-adb4d23722fc-metrics-certs\") pod \"network-metrics-daemon-vwf28\" (UID: \"1fbd252e-c54f-4a19-b637-adb4d23722fc\") " pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.807190 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" event={"ID":"fa3816c0-0f04-4410-a56c-3602c754d5c0","Type":"ContainerStarted","Data":"ea221943bd6e5d03a08505ac57d76dd1b57646492b022d9f76dec988f0b36acd"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.810373 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.835573 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" event={"ID":"99ef4716-24ef-4a6b-9a9f-32a139d60aeb","Type":"ContainerStarted","Data":"265ed3481c2f394adc3158cfa07c2a100a0b7987b184527eaf096fda7b351a7d"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.846114 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" event={"ID":"0c2ae4ef-cf7d-4c77-9892-00d84584bed1","Type":"ContainerStarted","Data":"109b5f8b4aa138089f4e0270d60c921292b987f50dac55c26df4fdcb711f5443"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.848717 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.854396 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4qlhj"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.854462 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" event={"ID":"a6606de8-e3c8-4a97-ae81-0b526c53fc1c","Type":"ContainerStarted","Data":"bb00ade24904b87a95e441de431e5dbb381bbfd2916ae8ef99b4a6e9a45c7f06"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.859369 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" 
event={"ID":"18461f11-f1b2-43b3-b1c1-9fc3ee55283c","Type":"ContainerStarted","Data":"99c32405864cdc8f52dc4010bdfe7dc0f9c987a72f7c28b2d131ee1bb55f0104"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.866307 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" event={"ID":"c9c50ad8-bd30-431a-80b1-290290cc1ea8","Type":"ContainerStarted","Data":"38afc0f485ece75ae5fc3ea1f91deed6a117dfd05fb05b49df8a85fc1601263e"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.866397 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" event={"ID":"c9c50ad8-bd30-431a-80b1-290290cc1ea8","Type":"ContainerStarted","Data":"2cdb14421454e52409a00e10ecb892298ecb3b2c964a1f710780ddfb36fa2cff"} Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.888054 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.888314 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.388285379 +0000 UTC m=+113.346521508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.889046 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.890441 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.390425206 +0000 UTC m=+113.348661335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.896722 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.902506 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.945290 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52"] Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.991621 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:10 crc kubenswrapper[5110]: E0130 00:14:10.992219 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.492190406 +0000 UTC m=+113.450426535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:10 crc kubenswrapper[5110]: I0130 00:14:10.993312 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:10.992370 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:10.999661 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.499641681 +0000 UTC m=+113.457877810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.004284 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwf28" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.098342 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.099446 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.599403899 +0000 UTC m=+113.557640028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.108527 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.111660 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.61164637 +0000 UTC m=+113.569882499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: W0130 00:14:11.198361 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71fe9337_e0aa_4289_93b3_9aea0bdc284b.slice/crio-e08324a61c1d2b7d2285ff3921b3e7b907fbdb94e3579091a872f1acddf93e8d WatchSource:0}: Error finding container e08324a61c1d2b7d2285ff3921b3e7b907fbdb94e3579091a872f1acddf93e8d: Status 404 returned error can't find the container with id e08324a61c1d2b7d2285ff3921b3e7b907fbdb94e3579091a872f1acddf93e8d Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.213558 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.213910 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.713890842 +0000 UTC m=+113.672126971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.215771 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.226409 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-q9fd8"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.251195 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.298105 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.302748 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.315132 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.315705 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.815689193 +0000 UTC m=+113.773925322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.316009 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.416119 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.417064 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.917042302 +0000 UTC m=+113.875278431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.417583 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.417980 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:11.917971047 +0000 UTC m=+113.876207166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.452348 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-t2qff"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.456984 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.483856 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.497575 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vvz9f"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.497623 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dvkqm"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.520559 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.520913 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.020889637 +0000 UTC m=+113.979125766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.523204 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.526806 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fwqsr" podStartSLOduration=91.526784752 podStartE2EDuration="1m31.526784752s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.519266424 +0000 UTC m=+113.477502553" watchObservedRunningTime="2026-01-30 00:14:11.526784752 +0000 UTC m=+113.485020881" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.562572 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.638491 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.638882 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.138867442 +0000 UTC m=+114.097103571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.650548 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" podStartSLOduration=91.650521378 podStartE2EDuration="1m31.650521378s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.626795506 +0000 UTC m=+113.585031635" watchObservedRunningTime="2026-01-30 00:14:11.650521378 +0000 UTC m=+113.608757507" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.650815 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2dmcg"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.673078 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.674972 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tm4w8"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.676578 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn"] Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.740076 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.740667 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.240605162 +0000 UTC m=+114.198841301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.840048 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9q6kw" podStartSLOduration=5.84002992 podStartE2EDuration="5.84002992s" podCreationTimestamp="2026-01-30 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.837994157 +0000 UTC m=+113.796230296" watchObservedRunningTime="2026-01-30 00:14:11.84002992 +0000 UTC m=+113.798266049" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.843812 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.844171 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.344155318 +0000 UTC m=+114.302391447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.887121 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" podStartSLOduration=91.887095655 podStartE2EDuration="1m31.887095655s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.884664811 +0000 UTC m=+113.842900940" watchObservedRunningTime="2026-01-30 00:14:11.887095655 +0000 UTC m=+113.845331784" Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.948706 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:11 crc kubenswrapper[5110]: E0130 00:14:11.949556 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.449536843 +0000 UTC m=+114.407772972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.953448 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" event={"ID":"71fe9337-e0aa-4289-93b3-9aea0bdc284b","Type":"ContainerStarted","Data":"e08324a61c1d2b7d2285ff3921b3e7b907fbdb94e3579091a872f1acddf93e8d"} Jan 30 00:14:11 crc kubenswrapper[5110]: W0130 00:14:11.997816 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8062691e_72b4_422c_9215_b86e22d137c1.slice/crio-c9f664c50b13bd8b262f93c52cfb319ccb52036b13ebdef3847622a50db02359 WatchSource:0}: Error finding container c9f664c50b13bd8b262f93c52cfb319ccb52036b13ebdef3847622a50db02359: Status 404 returned error can't find the container with id c9f664c50b13bd8b262f93c52cfb319ccb52036b13ebdef3847622a50db02359 Jan 30 00:14:11 crc kubenswrapper[5110]: I0130 00:14:11.998062 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" event={"ID":"959555fc-6a2d-4e6c-bc87-84864eeacb39","Type":"ContainerStarted","Data":"0603f4ce8287bb03b93b9b87a2d47016f6fe59bc74d3c068c9fe580b278a5560"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.020555 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" event={"ID":"2a59f2db-3db1-423c-9b1e-287aded6f8c7","Type":"ContainerStarted","Data":"8e358240e10a575baf01f762feea7ba84efa1400a9d47ab2aa4311415e8ec530"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.056580 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.057406 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.557389323 +0000 UTC m=+114.515625452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.069604 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" event={"ID":"c6cb5412-1e3d-4632-96a0-48afa9db27bb","Type":"ContainerStarted","Data":"96b1118a2ff319233a6a601c15486f29042a52d621613a82f0590eb2fa5d8a51"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.070839 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" event={"ID":"e30d7bf6-db40-4a33-9847-b27348b08821","Type":"ContainerStarted","Data":"90d5dca3967cd01462381f46ef65dcc0081a5a00fb67ec684f844a65fca4ef23"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.076460 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" event={"ID":"669ca75b-a358-4dbb-a96c-ca95caffcfa1","Type":"ContainerStarted","Data":"2a1259420074111b6f0da9a81767a9b3e337aba949ce0107f4ca3bb3699268ee"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.078265 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.078994 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vvz9f" event={"ID":"a335e035-e28e-4f63-8c7e-97b0059d0b13","Type":"ContainerStarted","Data":"96f79d39799e4a26b8c3185237bd9938afbc8b5d721437a103931721924e6b22"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.081590 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" event={"ID":"a4a311df-cb31-4526-a1fc-3a58634d5dff","Type":"ContainerStarted","Data":"eb1b6b9db7cdab6f1c9c4cef6d1d4a393c61d588f009cae2dc2d94472782168a"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.092324 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" event={"ID":"8c0e8717-0f55-4a74-8ac6-086c3267e836","Type":"ContainerStarted","Data":"9513f4068b6f3683e9ce9a355adbfbfd2a46f14882e880ad347a25132a259dc7"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.102138 5110 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-g5znk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.102219 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" podUID="669ca75b-a358-4dbb-a96c-ca95caffcfa1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 
00:14:12.103943 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-q9fd8" event={"ID":"b1823c5b-86dc-4bbf-8964-bc19dba82794","Type":"ContainerStarted","Data":"e3a1787e52f1815f6cd908783de6df1bfe8858084bf455d4801698a4d2e48206"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.118478 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" event={"ID":"fa3816c0-0f04-4410-a56c-3602c754d5c0","Type":"ContainerStarted","Data":"41a57e640dafda47fdad158839a58754a1dbdcddbc3baf0bdfbe8ae0f16c44e1"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.122133 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" event={"ID":"0c2ae4ef-cf7d-4c77-9892-00d84584bed1","Type":"ContainerStarted","Data":"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.122806 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.147587 5110 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-vwh7r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.147673 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.171083 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.172323 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.672287778 +0000 UTC m=+114.630523927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.179780 5110 generic.go:358] "Generic (PLEG): container finished" podID="c9c50ad8-bd30-431a-80b1-290290cc1ea8" containerID="38afc0f485ece75ae5fc3ea1f91deed6a117dfd05fb05b49df8a85fc1601263e" exitCode=0 Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.179972 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" event={"ID":"c9c50ad8-bd30-431a-80b1-290290cc1ea8","Type":"ContainerDied","Data":"38afc0f485ece75ae5fc3ea1f91deed6a117dfd05fb05b49df8a85fc1601263e"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.191163 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-t2qff" event={"ID":"813993d2-50e6-4b33-9fc1-d354e519945a","Type":"ContainerStarted","Data":"213ef7a9f0369305b92de8b338556f4c7fc436718b551b2647b5f4caf87f1df8"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.273982 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.274859 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.774840838 +0000 UTC m=+114.733076967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.275260 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" event={"ID":"0588f21c-6055-4426-a75e-6e581b2f8b59","Type":"ContainerStarted","Data":"20728233e57a12ec90958ed537222f9f4f170a59608d8a7b03b675a97ab30948"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.349207 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-r6lp4" event={"ID":"a070c3b8-7e87-4386-98d0-7ed3aaa53772","Type":"ContainerStarted","Data":"92ed3f2d1d5c3b6c9b77ffb3cacfa3c73dba13815c977931be648840e1aaf89e"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.349253 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-r6lp4" event={"ID":"a070c3b8-7e87-4386-98d0-7ed3aaa53772","Type":"ContainerStarted","Data":"a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.372448 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" event={"ID":"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef","Type":"ContainerStarted","Data":"573a7b891380ca1d6b350af13133b850382ab310c25289431f93faf47b9854b3"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.376188 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.376528 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.876504966 +0000 UTC m=+114.834741095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.377596 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" event={"ID":"327eaa18-356c-4a5b-a6e2-a6cea319d8cb","Type":"ContainerStarted","Data":"73d1a2bd1a54d09096387a1d74e3f32e7850c8a55f3b3b9f06574a54b5311b20"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.381641 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" event={"ID":"33877551-c042-4fde-bf15-c4d58e9c3321","Type":"ContainerStarted","Data":"6f4642366f5918995c8e06adac6e00a7202bfb244ac3e3b990dfbcfa240e866c"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.394179 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" event={"ID":"f7a48a07-3ab8-4b38-be60-baa4f39a0757","Type":"ContainerStarted","Data":"2521d1661978a9c8e9fd7ba0ff6e6c7bbbcee7a00b518633f1560a4adad2263d"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.404502 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" event={"ID":"99861aba-0721-4a1b-9156-438f84b1480c","Type":"ContainerStarted","Data":"c5259e8f3dd9637ce888997c3a096b3c7e662669d9a4a486b3862a6c65970f55"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.430849 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" event={"ID":"71abf881-27f0-4048-8f11-5585b96cf594","Type":"ContainerStarted","Data":"c264d1baf8c050049521c91c38082ebe4d3202696c9b12f6a31abd7ce2442c65"} Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.433529 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-k8w5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.433667 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-k8w5p" podUID="1f225323-7f5a-46bf-a9a3-1093d025b0b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.449698 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-bvwvj" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.481639 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vwf28"] Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.487475 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.491266 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:12.991250176 +0000 UTC m=+114.949486305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: W0130 00:14:12.567797 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-b71b532be1fdb113966b13d76ccc9d5200645a7376e7bb8fa631f39c13742ebb WatchSource:0}: Error finding container b71b532be1fdb113966b13d76ccc9d5200645a7376e7bb8fa631f39c13742ebb: Status 404 returned error can't find the container with id b71b532be1fdb113966b13d76ccc9d5200645a7376e7bb8fa631f39c13742ebb Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.577503 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kp4h8" podStartSLOduration=93.577456058 podStartE2EDuration="1m33.577456058s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:12.541667459 +0000 UTC m=+114.499903588" watchObservedRunningTime="2026-01-30 00:14:12.577456058 +0000 UTC m=+114.535692197" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.597841 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.598263 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.098223783 +0000 UTC m=+115.056459912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.602046 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.603022 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.624493 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.700297 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.701097 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.201076861 +0000 UTC m=+115.159312990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.764532 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-k8w5p" podStartSLOduration=92.764505425 podStartE2EDuration="1m32.764505425s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:12.72274387 +0000 UTC m=+114.680979989" watchObservedRunningTime="2026-01-30 00:14:12.764505425 +0000 UTC m=+114.722741554" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.808191 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" podStartSLOduration=92.808175821 podStartE2EDuration="1m32.808175821s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:12.806861957 +0000 UTC m=+114.765098086" watchObservedRunningTime="2026-01-30 00:14:12.808175821 +0000 UTC m=+114.766411950" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.809309 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.809595 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.309575468 +0000 UTC m=+115.267811597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.894933 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mxp6c" podStartSLOduration=92.894913217 podStartE2EDuration="1m32.894913217s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:12.889149206 +0000 UTC m=+114.847385345" watchObservedRunningTime="2026-01-30 00:14:12.894913217 +0000 UTC m=+114.853149346" Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.911596 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:12 crc kubenswrapper[5110]: E0130 00:14:12.912228 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.4121797 +0000 UTC m=+115.370415829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:12 crc kubenswrapper[5110]: I0130 00:14:12.984015 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podStartSLOduration=92.983983374 podStartE2EDuration="1m32.983983374s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:12.961252037 +0000 UTC m=+114.919488166" watchObservedRunningTime="2026-01-30 00:14:12.983983374 +0000 UTC m=+114.942219503" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.015796 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.016288 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.516267021 +0000 UTC m=+115.474503150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.094816 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29495520-r6lp4" podStartSLOduration=94.094692019 podStartE2EDuration="1m34.094692019s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.093558859 +0000 UTC m=+115.051794988" watchObservedRunningTime="2026-01-30 00:14:13.094692019 +0000 UTC m=+115.052928148" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.106850 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.116836 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:13 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:13 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:13 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.116938 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.118144 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.118674 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.618641377 +0000 UTC m=+115.576877496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.120634 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" podStartSLOduration=93.120611029 podStartE2EDuration="1m33.120611029s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.117035555 +0000 UTC m=+115.075271684" watchObservedRunningTime="2026-01-30 00:14:13.120611029 +0000 UTC m=+115.078847148" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.142834 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" podStartSLOduration=94.142812991 podStartE2EDuration="1m34.142812991s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.140414578 +0000 UTC m=+115.098650707" watchObservedRunningTime="2026-01-30 00:14:13.142812991 +0000 UTC m=+115.101049130" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.219776 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.219996 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.719964335 +0000 UTC m=+115.678200454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.220367 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.221010 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.721004123 +0000 UTC m=+115.679240252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.321770 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.322662 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.822621159 +0000 UTC m=+115.780857288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.424620 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.425189 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:13.925108218 +0000 UTC m=+115.883344347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.490915 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" event={"ID":"33877551-c042-4fde-bf15-c4d58e9c3321","Type":"ContainerStarted","Data":"592488866ca4e5f534c7f77424d02563b04073bf107808faad33bd2420b56402"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.518146 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerStarted","Data":"5c694b2e34bee137a9a92b625c41e7da7d318ea0fc9589befde5488183ef71b3"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.518481 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.523005 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gd6hl" podStartSLOduration=93.522980975 podStartE2EDuration="1m33.522980975s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.51669619 +0000 UTC m=+115.474932339" watchObservedRunningTime="2026-01-30 00:14:13.522980975 +0000 UTC m=+115.481217104" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.525944 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.527163 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.027144625 +0000 UTC m=+115.985380754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.539744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" event={"ID":"1fcecaab-3109-4c05-a95a-2e78bf76b2df","Type":"ContainerStarted","Data":"3d259d61f0ec2da2a7ab85f7781ae07a212a017c3d9f4d350086a32f491f6bdf"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.546692 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" podStartSLOduration=93.546671837 podStartE2EDuration="1m33.546671837s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.544310795 +0000 UTC m=+115.502546924" watchObservedRunningTime="2026-01-30 00:14:13.546671837 +0000 UTC m=+115.504908106" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.562178 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-kxkkt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.562244 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.571244 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-tkjk4" podStartSLOduration=93.571229331 podStartE2EDuration="1m33.571229331s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.571137439 +0000 UTC m=+115.529373568" watchObservedRunningTime="2026-01-30 00:14:13.571229331 +0000 UTC m=+115.529465460" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.593420 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-tm4w8" event={"ID":"6b110753-9089-443c-afc4-462284914075","Type":"ContainerStarted","Data":"9c6ffaf9de3f0d2613205a1c9922c747e5e26e7f9aab0f808e2d2e6c3f241468"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.632050 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.633634 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.133612288 +0000 UTC m=+116.091848417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.646227 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" event={"ID":"c6cb5412-1e3d-4632-96a0-48afa9db27bb","Type":"ContainerStarted","Data":"0146bca197981468716a4d595b80352518097f184550d4b1f6301d958173f5e6"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.657018 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" event={"ID":"e30d7bf6-db40-4a33-9847-b27348b08821","Type":"ContainerStarted","Data":"e97c77a256f144d2bf7ea1b7c29ef405b7d2863b34e45aeebd34d9eb2138a01a"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.684452 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" event={"ID":"fa3816c0-0f04-4410-a56c-3602c754d5c0","Type":"ContainerStarted","Data":"1e40095201a9694844d7d588027512dc15c970f2088ff13b037e4b58dbf703e3"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.684865 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sd7xt" podStartSLOduration=94.684845622 podStartE2EDuration="1m34.684845622s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.683782324 +0000 UTC m=+115.642018443" watchObservedRunningTime="2026-01-30 00:14:13.684845622 +0000 UTC m=+115.643081761" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.705107 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" event={"ID":"a6606de8-e3c8-4a97-ae81-0b526c53fc1c","Type":"ContainerStarted","Data":"7382b0a9a01e790a9e8d02c04121f959b2130e7dd4b7eac10c7042364f1cc3e1"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.718953 5110 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6bhnt" podStartSLOduration=94.718916126 podStartE2EDuration="1m34.718916126s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.711790489 +0000 UTC m=+115.670026618" watchObservedRunningTime="2026-01-30 00:14:13.718916126 +0000 UTC m=+115.677152255" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.724234 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" event={"ID":"18461f11-f1b2-43b3-b1c1-9fc3ee55283c","Type":"ContainerStarted","Data":"f03d59d95ebec4c5f4d59d05f7c6d2f83f197347ec0f6c5dd3c9650c46702bd0"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.737589 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"b71b532be1fdb113966b13d76ccc9d5200645a7376e7bb8fa631f39c13742ebb"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.738210 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.740008 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-lccls" podStartSLOduration=93.739994659 podStartE2EDuration="1m33.739994659s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.737118924 +0000 UTC m=+115.695355053" watchObservedRunningTime="2026-01-30 00:14:13.739994659 +0000 UTC m=+115.698230778" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.740622 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.240601085 +0000 UTC m=+116.198837214 (durationBeforeRetry 500ms). 
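
The pod_startup_latency_tracker lines are measurements, not errors. For machine-approver just above, podStartSLOduration=94.718916126 is exactly watchObservedRunningTime (00:14:13.718916126) minus podCreationTimestamp (00:12:39), and the zero-valued firstStartedPulling/lastFinishedPulling ("0001-01-01 00:00:00 +0000 UTC" is Go's time.Time zero) mean no image pull was recorded for the pod; the ~94s is time spent created-but-not-running while the node came up, not pull latency. The arithmetic, reconstructed:

    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
    	created, _ := time.Parse(layout, "2026-01-30 00:12:39 +0000 UTC")
    	observed, _ := time.Parse(layout, "2026-01-30 00:14:13.718916126 +0000 UTC")

    	var firstStartedPulling time.Time // zero value prints as 0001-01-01 00:00:00 +0000 UTC
    	fmt.Println("image pull observed:", !firstStartedPulling.IsZero())

    	// Prints 1m34.718916126s, matching the logged podStartE2EDuration.
    	fmt.Println("startup duration:", observed.Sub(created))
    }
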
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.766611 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-t2qff" event={"ID":"813993d2-50e6-4b33-9fc1-d354e519945a","Type":"ContainerStarted","Data":"bc1d7f3092ec1b1bab9a915b80cf5fecbbf788a504d316d433bfe7a881b0749e"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.789505 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wjggp" podStartSLOduration=93.789491648 podStartE2EDuration="1m33.789491648s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.767695836 +0000 UTC m=+115.725931965" watchObservedRunningTime="2026-01-30 00:14:13.789491648 +0000 UTC m=+115.747727767" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.798549 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"f427be88dd13136d3a2e99c160e987455c3c2de1e94b4805df0f6a4b19f54fe1"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.830846 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-t2qff" podStartSLOduration=93.830826912 podStartE2EDuration="1m33.830826912s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:13.792198869 +0000 UTC m=+115.750434998" watchObservedRunningTime="2026-01-30 00:14:13.830826912 +0000 UTC m=+115.789063041" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.838465 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" event={"ID":"eed6c878-a138-4828-adc1-2dea6827fc2b","Type":"ContainerStarted","Data":"20af947866c21a99987d21d7316096eb2c3c925f58053ba10a052f09a0206777"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.848231 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.850181 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.3501635 +0000 UTC m=+116.308399619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.858592 5110 generic.go:358] "Generic (PLEG): container finished" podID="0f4aee94-d32d-43e7-93b1-40c3a05ed8ef" containerID="9f474a22cb608bcd81077bb1250c643c8eb2fd46afe04aa77a598ee2abe6f34a" exitCode=0 Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.858666 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" event={"ID":"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef","Type":"ContainerDied","Data":"9f474a22cb608bcd81077bb1250c643c8eb2fd46afe04aa77a598ee2abe6f34a"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.878630 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" event={"ID":"327eaa18-356c-4a5b-a6e2-a6cea319d8cb","Type":"ContainerStarted","Data":"9a3f49ff0935d36732eea157b54616e6cb59419b65f72e72e40f1fb642eab4db"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.949887 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:13 crc kubenswrapper[5110]: E0130 00:14:13.951788 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.451757625 +0000 UTC m=+116.409993754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.959011 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" event={"ID":"86c4232c-55ee-4511-a00e-eea5740d1a68","Type":"ContainerStarted","Data":"7771f5820e4318eacd6412bd04815a29c990743c1286ea30ce0d47df39f8d4fa"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.959060 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" event={"ID":"86c4232c-55ee-4511-a00e-eea5740d1a68","Type":"ContainerStarted","Data":"a06a0ff5501c29e61eddb45c303696e7e7c5400cbc6d84cb687f5ccd2d51442a"} Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.960253 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.981016 5110 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-qpjvn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.981070 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" podUID="86c4232c-55ee-4511-a00e-eea5740d1a68" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 00:14:13 crc kubenswrapper[5110]: I0130 00:14:13.984561 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" event={"ID":"7194dc7a-97fb-44de-b577-37143c6365e8","Type":"ContainerStarted","Data":"91fea70444cfbe89b4c2cc56d09dfc3c5211fb9c3a848eb39af29bca9d146034"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.052748 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.054632 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.554616634 +0000 UTC m=+116.512852763 (durationBeforeRetry 500ms). 
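
The olm-operator readiness failure just above ("dial tcp 10.217.0.41:8443: connect: connection refused") lands immediately after its ContainerStarted event, which is the usual benign window where the process is running but has not bound its port yet; kubelet keeps probing on the configured period and flips the pod to ready once the listener is up (the "SyncLoop (probe) ... status=ready" entries that follow). Mechanically the probe is little more than a GET against the pod IP, approximately (for HTTPS probes like olm-operator's, kubelet additionally skips certificate verification):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce approximates an HTTP readiness probe: GET the endpoint with a
    // short timeout and treat any status below 400 as success.
    func probeOnce(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "dial tcp 10.217.0.27:8080: connect: connection refused"
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 400 {
    		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probeOnce("http://10.217.0.27:8080/healthz"))
    }
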
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.107249 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tpzqq" podStartSLOduration=94.107227594 podStartE2EDuration="1m34.107227594s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.023857187 +0000 UTC m=+115.982093316" watchObservedRunningTime="2026-01-30 00:14:14.107227594 +0000 UTC m=+116.065463723" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.107513 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" podStartSLOduration=94.107508132 podStartE2EDuration="1m34.107508132s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.104891083 +0000 UTC m=+116.063127212" watchObservedRunningTime="2026-01-30 00:14:14.107508132 +0000 UTC m=+116.065744261" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.129741 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:14 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:14 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:14 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.129812 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.130463 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" event={"ID":"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40","Type":"ContainerStarted","Data":"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.130611 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.153838 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.155484 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.65545865 +0000 UTC m=+116.613694769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.177940 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwf28" event={"ID":"1fbd252e-c54f-4a19-b637-adb4d23722fc","Type":"ContainerStarted","Data":"942f0a52748cff642ff838b97f3b00ce35b66c15cd087d944e1516c8e2972906"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.178680 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" podStartSLOduration=8.178627067 podStartE2EDuration="8.178627067s" podCreationTimestamp="2026-01-30 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.17798724 +0000 UTC m=+116.136223369" watchObservedRunningTime="2026-01-30 00:14:14.178627067 +0000 UTC m=+116.136863196" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.201726 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" event={"ID":"71fe9337-e0aa-4289-93b3-9aea0bdc284b","Type":"ContainerStarted","Data":"1174768baf38b225a5dc3d28713ecc00c22381324e1f9a8d9db02630e84256cd"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.217962 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vvz9f" event={"ID":"a335e035-e28e-4f63-8c7e-97b0059d0b13","Type":"ContainerStarted","Data":"720f70974332db4d69a43c2afbe3e2e0a85fa9ba7fbee902b66e3075bded5280"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.242987 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" podStartSLOduration=94.242965634 podStartE2EDuration="1m34.242965634s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.241804444 +0000 UTC m=+116.200040573" watchObservedRunningTime="2026-01-30 00:14:14.242965634 +0000 UTC m=+116.201201763" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.256219 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.259502 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.759472558 +0000 UTC m=+116.717708877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.274117 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" event={"ID":"a4a311df-cb31-4526-a1fc-3a58634d5dff","Type":"ContainerStarted","Data":"fc36e72ade8a4af23698b1715c2c0af4b270f66f7df323f8e226652b31ebfcce"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.288384 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vvz9f" podStartSLOduration=8.288357515 podStartE2EDuration="8.288357515s" podCreationTimestamp="2026-01-30 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.287649357 +0000 UTC m=+116.245885486" watchObservedRunningTime="2026-01-30 00:14:14.288357515 +0000 UTC m=+116.246593644" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.293221 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.296578 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-q9fd8" event={"ID":"b1823c5b-86dc-4bbf-8964-bc19dba82794","Type":"ContainerStarted","Data":"e748e1b62bdae04b6510fbb292e4df8d87b3114b4003bc29e9bf4af7febe60a8"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.315603 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" event={"ID":"8062691e-72b4-422c-9215-b86e22d137c1","Type":"ContainerStarted","Data":"c9f664c50b13bd8b262f93c52cfb319ccb52036b13ebdef3847622a50db02359"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.336658 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tnkh5" podStartSLOduration=94.336636202 podStartE2EDuration="1m34.336636202s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.33463918 +0000 UTC m=+116.292875309" watchObservedRunningTime="2026-01-30 00:14:14.336636202 +0000 UTC m=+116.294872331" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.346633 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" event={"ID":"99ef4716-24ef-4a6b-9a9f-32a139d60aeb","Type":"ContainerStarted","Data":"93da21793e5fe1bc9f17fb6c2d76338194d775af72b9594fc9bdd07f83198fcd"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.359584 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.360907 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.860886018 +0000 UTC m=+116.819122147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.393762 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-q9fd8" podStartSLOduration=94.39374215 podStartE2EDuration="1m34.39374215s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.392749534 +0000 UTC m=+116.350985663" watchObservedRunningTime="2026-01-30 00:14:14.39374215 +0000 UTC m=+116.351978279" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.447424 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" event={"ID":"c9c50ad8-bd30-431a-80b1-290290cc1ea8","Type":"ContainerStarted","Data":"b78ae29c4b258426abcaf3a5f22dec92fd341d229bb01efdec5f483ee831f5b6"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.447666 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.460346 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" event={"ID":"39298aae-aa93-40ad-8dfc-9d5fdea9ae10","Type":"ContainerStarted","Data":"0c5da6170a72f2a208eea63f5afc2989159746faa505c0202b5c322b6b8ff6ec"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.461029 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.461394 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:14.961380235 +0000 UTC m=+116.919616364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.474742 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" podStartSLOduration=94.474723215 podStartE2EDuration="1m34.474723215s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:14.473115263 +0000 UTC m=+116.431351392" watchObservedRunningTime="2026-01-30 00:14:14.474723215 +0000 UTC m=+116.432959344" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.494120 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"5d75fad4704fdec4d63b596f22303edede08acf619db8c322a0c8202e9fe5530"} Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.505724 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-62799" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.510727 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-g5znk" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.512291 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.562121 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.562480 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.062443367 +0000 UTC m=+117.020679496 (durationBeforeRetry 500ms). 
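
The remedy is already visible: csi-hostpathplugin-2dmcg in the hostpath-provisioner namespace reports ContainerStarted at 00:14:14.460 just above. Once its registrar announces kubevirt.io.hostpath-provisioner over the kubelet plugin socket, the driver lookup starts succeeding and the blocked mount/unmount operations drain on their next retry tick. The overall shape of that loop is poll-until-registered, as in this sketch (names invented):

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForDriver polls a lookup until the driver registers or the timeout
    // expires -- the overall shape of the retry loop running in this log.
    func waitForDriver(registered func(string) bool, name string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if registered(name) {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("driver %s not registered within %s", name, timeout)
    }

    func main() {
    	readyAt := time.Now().Add(1500 * time.Millisecond) // pretend registration lands here
    	err := waitForDriver(func(string) bool { return time.Now().After(readyAt) },
    		"kubevirt.io.hostpath-provisioner", 5*time.Second, 500*time.Millisecond)
    	fmt.Println("result:", err)
    }
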
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.568118 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.572062 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.072042528 +0000 UTC m=+117.030278657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.624729 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"] Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.636843 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.645351 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.670430 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.670677 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.671095 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.671140 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vqn5\" (UniqueName: \"kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.671842 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"] Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.671865 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.171833847 +0000 UTC m=+117.130069976 (durationBeforeRetry 500ms). 
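
The SyncLoop ADD/UPDATE entries threaded through here are kubelet's main event loop reacting to the API server creating the marketplace catalog pods (and, a little further down, a DELETE for the cni-sysctl-allowlist pod); an ADD with no existing sandbox produces the "No sandbox for pod can be found. Need to start a new one" line, after which volumes are attached and a fresh CRI sandbox is started. Schematically, the dispatch looks like:

    package main

    import "fmt"

    type op string

    const (
    	opAdd    op = "ADD"
    	opUpdate op = "UPDATE"
    	opDelete op = "DELETE"
    )

    type podUpdate struct {
    	op   op
    	pods []string
    }

    // A toy version of kubelet's sync loop: one channel of pod updates from
    // the API server, dispatched by operation, as in the "SyncLoop ADD/UPDATE/
    // DELETE" entries above.
    func syncLoop(updates <-chan podUpdate) {
    	for u := range updates {
    		switch u.op {
    		case opAdd:
    			fmt.Printf("SyncLoop ADD source=api pods=%v\n", u.pods)
    		case opUpdate:
    			fmt.Printf("SyncLoop UPDATE source=api pods=%v\n", u.pods)
    		case opDelete:
    			fmt.Printf("SyncLoop DELETE source=api pods=%v\n", u.pods)
    		}
    	}
    }

    func main() {
    	ch := make(chan podUpdate, 3)
    	ch <- podUpdate{opAdd, []string{"openshift-marketplace/certified-operators-jbdwz"}}
    	ch <- podUpdate{opUpdate, []string{"openshift-marketplace/certified-operators-jbdwz"}}
    	ch <- podUpdate{opDelete, []string{"openshift-multus/cni-sysctl-allowlist-ds-zfs4d"}}
    	close(ch)
    	syncLoop(ch)
    }
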
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.774497 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vqn5\" (UniqueName: \"kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.774709 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.774734 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.774771 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.775100 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.275072525 +0000 UTC m=+117.233308644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.777606 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.777858 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.802526 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vqn5\" (UniqueName: \"kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5\") pod \"certified-operators-jbdwz\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") " pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.805825 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bw6vt"] Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.820880 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.830061 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw6vt"] Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.859568 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.876844 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.877314 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.877370 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.877430 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.882964 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.382936395 +0000 UTC m=+117.341172524 (durationBeforeRetry 500ms). 
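
Worth noting in the reconciler_common.go lines: the same PVC is being worked from both directions. The node's actual state still holds the mount for deleted pod 9e9b5059..., so the reconciler keeps starting UnmountVolume, while the desired state wants the volume mounted for the replacement image-registry pod c9e7515a..., so it also keeps starting MountVolume; both halves then dead-end in the unregistered driver. A toy desired-vs-actual pass (schematic only, not the volume manager's real data structures):

    package main

    import "fmt"

    // One schematic reconcile pass over per-pod volume mounts: unmount what is
    // mounted but no longer desired, mount what is desired but not yet mounted.
    func reconcile(desired, actual map[string]bool, mount, unmount func(key string) error) {
    	for key := range actual {
    		if !desired[key] {
    			if err := unmount(key); err != nil {
    				fmt.Println("UnmountVolume.TearDown failed:", err)
    			}
    		}
    	}
    	for key := range desired {
    		if !actual[key] {
    			if err := mount(key); err != nil {
    				fmt.Println("MountVolume.MountDevice failed:", err)
    			}
    		}
    	}
    }

    func main() {
    	// Keys are volume^podUID pairs, so one PVC can appear on both sides
    	// under different pods, yielding the unmount+mount pair in the log.
    	actual := map[string]bool{"pvc-b21f41aa^9e9b5059": true}
    	desired := map[string]bool{"pvc-b21f41aa^c9e7515a": true}
    	errNoDriver := func(string) error {
    		return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    	}
    	reconcile(desired, actual, errNoDriver, errNoDriver)
    }
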
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.980109 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.980174 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.980207 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.980230 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:14 crc kubenswrapper[5110]: E0130 00:14:14.980541 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.480523976 +0000 UTC m=+117.438760105 (durationBeforeRetry 500ms). 
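
Unlike the PVC, the catalog pods' three volumes need no external driver, which is why their SetUp lines succeed immediately: utilities and catalog-content are emptyDirs on the node, and kube-api-access-* is the projected service-account volume kubelet assembles from a TokenRequest token, the cluster CA bundle, and the namespace. Expressed with the k8s.io/api/core/v1 types, the layout is roughly the following (a hand-written approximation, not dumped from the cluster; the token lifetime is an assumption, and building this requires the k8s.io/api module):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	expiry := int64(3607) // assumed token lifetime, not taken from this log

    	volumes := []corev1.Volume{
    		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
    		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
    		{Name: "kube-api-access-8vqn5", VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					// Short-lived API token requested by kubelet.
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token", ExpirationSeconds: &expiry}},
    					// Cluster CA bundle for verifying the API server.
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					// The pod's own namespace via the downward API.
    					{DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "namespace",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
    						}},
    					}},
    				},
    			},
    		}},
    	}

    	for _, v := range volumes {
    		fmt.Println("volume:", v.Name)
    	}
    }
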
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.980889 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:14:14 crc kubenswrapper[5110]: I0130 00:14:14.981133 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.009392 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z\") pod \"community-operators-bw6vt\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") " pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.009651 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbdwz"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.013880 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"]
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.037158 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.038056 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"]
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.082885 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.083215 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.583198269 +0000 UTC m=+117.541434398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.109394 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zfs4d"]
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.116437 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:14:15 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 30 00:14:15 crc kubenswrapper[5110]: [+]process-running ok
Jan 30 00:14:15 crc kubenswrapper[5110]: healthz check failed
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.116525 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.185360 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.185420 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9765\" (UniqueName: \"kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.185451 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.185501 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.185831 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.685816182 +0000 UTC m=+117.644052311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.195616 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.199454 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ldfsg"]
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.222472 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.222767 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldfsg"]
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.289046 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.289383 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9765\" (UniqueName: \"kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.289422 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.289465 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.289933 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.290155 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.290220 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.790202151 +0000 UTC m=+117.748438280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.321348 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9765\" (UniqueName: \"kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765\") pod \"certified-operators-8r5gk\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.378664 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8r5gk"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.378976 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45720: no serving certificate available for the kubelet"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.394777 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qrhd\" (UniqueName: \"kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.394838 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.394857 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.394917 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.395369 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:15.895352159 +0000 UTC m=+117.853588288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.499664 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.500130 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.000105068 +0000 UTC m=+117.958341197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.500623 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.500826 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qrhd\" (UniqueName: \"kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.500907 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.500934 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.501379 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.501643 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.001634678 +0000 UTC m=+117.959870807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.502320 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.509872 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45722: no serving certificate available for the kubelet"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.523516 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" event={"ID":"7194dc7a-97fb-44de-b577-37143c6365e8","Type":"ContainerStarted","Data":"54eb1b842ba6580806e357331b69e6175f7679e02f1542f0926b79eed9686296"}
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.530146 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qrhd\" (UniqueName: \"kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd\") pod \"community-operators-ldfsg\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.587643 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45730: no serving certificate available for the kubelet"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.592109 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" event={"ID":"f7a48a07-3ab8-4b38-be60-baa4f39a0757","Type":"ContainerStarted","Data":"bacb60f6fbb81c7bfeddeaaae9b29da4851e323835921901f208dea0456b8db6"}
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.597607 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldfsg"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.600976 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" event={"ID":"99861aba-0721-4a1b-9156-438f84b1480c","Type":"ContainerStarted","Data":"794b7c42ea948b9c1cecee055119681d5e55eddf3342076756be47ba9b961004"}
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.604705 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.605154 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.105134693 +0000 UTC m=+118.063370822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.610198 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwf28" event={"ID":"1fbd252e-c54f-4a19-b637-adb4d23722fc","Type":"ContainerStarted","Data":"f29e5e943f37b603ad0ce2186370e78d5e9a33d21a1aa74a20a7d22f7cad96d9"}
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.636223 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-ct5gc" podStartSLOduration=95.636199118 podStartE2EDuration="1m35.636199118s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:15.626228307 +0000 UTC m=+117.584464436" watchObservedRunningTime="2026-01-30 00:14:15.636199118 +0000 UTC m=+117.594435247"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.661598 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-s8j52" event={"ID":"71fe9337-e0aa-4289-93b3-9aea0bdc284b","Type":"ContainerStarted","Data":"724812aa96b0e798a07315eecf13017b7cd14b695f2172a36ab494f482c0fdda"}
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.664697 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" podStartSLOduration=96.664672655 podStartE2EDuration="1m36.664672655s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:15.662731015 +0000 UTC m=+117.620967144" watchObservedRunningTime="2026-01-30 00:14:15.664672655 +0000 UTC m=+117.622908784"
Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.695531 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw6vt"] Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.698177 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" event={"ID":"8062691e-72b4-422c-9215-b86e22d137c1","Type":"ContainerStarted","Data":"35509b7782d8ac718e19aefb0e911cedb1c2f57aaada66bd24dbbef0449d0438"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.698229 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" event={"ID":"8062691e-72b4-422c-9215-b86e22d137c1","Type":"ContainerStarted","Data":"2acbaa55fc6f9e0f3966b95c20cf05c899e1ad8d1a26291b762fd8744e2be720"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.698534 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45746: no serving certificate available for the kubelet" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.711120 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.712153 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.212139531 +0000 UTC m=+118.170375660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.721502 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" event={"ID":"99ef4716-24ef-4a6b-9a9f-32a139d60aeb","Type":"ContainerStarted","Data":"f464913893cbbdf3be699acfee91039a864bcaa4ec362f11f4fb4cf78432abcb"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.732299 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-bhqzk" podStartSLOduration=95.732284019 podStartE2EDuration="1m35.732284019s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:15.730251476 +0000 UTC m=+117.688487605" watchObservedRunningTime="2026-01-30 00:14:15.732284019 +0000 UTC m=+117.690520138" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.748360 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" event={"ID":"0588f21c-6055-4426-a75e-6e581b2f8b59","Type":"ContainerStarted","Data":"cd4ab5889fc8a4eb0f00f8d8245f60599e06f694de911c0d4e6e6faf61f06dc3"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.749513 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.757350 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"0302fde5484db2ce00b3d5c1154f55b8ae93dc122ab8c06fa83118dda6dcd49c"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.768680 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-r2msw" podStartSLOduration=95.76853109 podStartE2EDuration="1m35.76853109s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:15.755201001 +0000 UTC m=+117.713437130" watchObservedRunningTime="2026-01-30 00:14:15.76853109 +0000 UTC m=+117.726767209" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.771770 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"] Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.775293 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tm4w8" event={"ID":"6b110753-9089-443c-afc4-462284914075","Type":"ContainerStarted","Data":"b6fda2c8ebd5d0448918916a566553b11ddbf4a2e82a9e28a3ff5bda6362b796"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.775343 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tm4w8" 
event={"ID":"6b110753-9089-443c-afc4-462284914075","Type":"ContainerStarted","Data":"9d49991cd53de3232f67b647e59d017cc948ca9526c9384bf67c025695e007b4"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.781253 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" podStartSLOduration=95.781239514 podStartE2EDuration="1m35.781239514s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:15.780004341 +0000 UTC m=+117.738240470" watchObservedRunningTime="2026-01-30 00:14:15.781239514 +0000 UTC m=+117.739475643" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.789633 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45758: no serving certificate available for the kubelet" Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.795744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" event={"ID":"2a59f2db-3db1-423c-9b1e-287aded6f8c7","Type":"ContainerStarted","Data":"e1063fbda5ef79c321f398b29deb2e7ff946ae49578cc1ab8550a7c9fb9791c0"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.795785 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" event={"ID":"2a59f2db-3db1-423c-9b1e-287aded6f8c7","Type":"ContainerStarted","Data":"8a1ea416f2099f717283dbbb2900cc79d589523b74f2d88a467b99eddc8db013"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.814105 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" event={"ID":"c6cb5412-1e3d-4632-96a0-48afa9db27bb","Type":"ContainerStarted","Data":"dca35ea7b295f95f772fcec063203fd47959bb30fe367f32aaa693afe18ef1b1"} Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.814554 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.814706 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.314676391 +0000 UTC m=+118.272912510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.815195 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.826898 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.326881511 +0000 UTC m=+118.285117640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.852707 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.917416 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.917695 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.417665473 +0000 UTC m=+118.375901602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.918125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:15 crc kubenswrapper[5110]: E0130 00:14:15.918490 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.418483815 +0000 UTC m=+118.376719944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:15 crc kubenswrapper[5110]: I0130 00:14:15.966466 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45766: no serving certificate available for the kubelet" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.023395 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.023851 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.523833399 +0000 UTC m=+118.482069518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073768 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073829 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" event={"ID":"8c0e8717-0f55-4a74-8ac6-086c3267e836","Type":"ContainerStarted","Data":"a863c512a6e92afba85dc35d30a3b7002579ee34235367dce03bd60a6367fa1b"} Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073869 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073892 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073904 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"556488178b79a0855c8ca86a075abb9fc7e30b2191fc5baa0ab33941ee836e29"} Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073917 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"ac2ae5f4fac0391870f1bc529de66facbdb48ef239567356d75629233daefe92"} Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.073939 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" event={"ID":"eed6c878-a138-4828-adc1-2dea6827fc2b","Type":"ContainerStarted","Data":"6776af0a3e87bbb7aeda914707b8270f70c9771b0baa4627da947f4f899a9bfd"} Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.074195 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.094906 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.095261 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.096837 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45772: no serving certificate available for the kubelet" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.121865 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tm4w8" podStartSLOduration=10.12183611 podStartE2EDuration="10.12183611s" podCreationTimestamp="2026-01-30 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:16.095639903 +0000 UTC m=+118.053876032" watchObservedRunningTime="2026-01-30 00:14:16.12183611 +0000 UTC m=+118.080072259" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.125677 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.127530 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:16 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:16 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:16 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.127605 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.128966 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.628951257 +0000 UTC m=+118.587187386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.130496 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" event={"ID":"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef","Type":"ContainerStarted","Data":"8250038b5dec44ffb47fa70e4350025148735cbd3005e55daad6f01d2103fb4c"} Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.152845 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"] Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.161704 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" podStartSLOduration=96.161686946 podStartE2EDuration="1m36.161686946s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:16.15195988 +0000 UTC m=+118.110196009" watchObservedRunningTime="2026-01-30 00:14:16.161686946 +0000 UTC m=+118.119923075" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.191144 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.196159 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.202822 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-qpjvn" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.218415 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-tkt8c" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.218935 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45778: no serving certificate available for the kubelet" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.226747 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.227034 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.227077 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.227194 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.727171474 +0000 UTC m=+118.685407593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.266187 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lf7lb" podStartSLOduration=96.266168997 podStartE2EDuration="1m36.266168997s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:16.264931554 +0000 UTC m=+118.223167683" watchObservedRunningTime="2026-01-30 00:14:16.266168997 +0000 UTC m=+118.224405126" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.328880 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.328959 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.329180 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.332182 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.832162478 +0000 UTC m=+118.790398607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.344478 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.432407 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldfsg"] Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.433102 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.433325 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:16.933306722 +0000 UTC m=+118.891542851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.453620 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j5gfn" podStartSLOduration=96.453595624 podStartE2EDuration="1m36.453595624s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:16.425536198 +0000 UTC m=+118.383772327" watchObservedRunningTime="2026-01-30 00:14:16.453595624 +0000 UTC m=+118.411831753" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.469130 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.535451 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.535928 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.035910574 +0000 UTC m=+118.994146703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.572750 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-5fsr2" podStartSLOduration=96.5727317 podStartE2EDuration="1m36.5727317s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:16.528359966 +0000 UTC m=+118.486596095" watchObservedRunningTime="2026-01-30 00:14:16.5727317 +0000 UTC m=+118.530967829" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.639106 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.639382 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.139358008 +0000 UTC m=+119.097594137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.650771 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"] Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.667214 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.673410 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.676298 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"] Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.741624 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.741697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.741750 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.741785 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9qp\" (UniqueName: \"kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.742188 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.242172976 +0000 UTC m=+119.200409105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.742408 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.756051 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ln5fr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.756132 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr" podUID="0588f21c-6055-4426-a75e-6e581b2f8b59" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.843037 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.843393 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.843453 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9qp\" (UniqueName: \"kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.843536 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.844263 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.344230613 +0000 UTC m=+119.302466732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.844583 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.845042 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.866717 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9qp\" (UniqueName: \"kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp\") pod \"redhat-marketplace-8l6l9\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") " pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.950413 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:16 crc kubenswrapper[5110]: E0130 00:14:16.950960 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.450934793 +0000 UTC m=+119.409170912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:16 crc kubenswrapper[5110]: I0130 00:14:16.991856 5110 ???:1] "http: TLS handshake error from 192.168.126.11:51270: no serving certificate available for the kubelet" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.015062 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"] Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.028178 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.032354 5110 util.go:30] "No sandbox for pod can be found. 
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.040137 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"]
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.059844 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.060893 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.560874767 +0000 UTC m=+119.519110896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.123589 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:14:17 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 30 00:14:17 crc kubenswrapper[5110]: [+]process-running ok
Jan 30 00:14:17 crc kubenswrapper[5110]: healthz check failed
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.123657 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.163323 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2p8\" (UniqueName: \"kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.163406 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.163434 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.163476 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz"
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.164459 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.664439455 +0000 UTC m=+119.622675584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.200542 5110 generic.go:358] "Generic (PLEG): container finished" podID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerID="c087ecfda238bc31ff77ddcb9a405db1c7a7fa452b6a2e20bce155584c9ba5cd" exitCode=0
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.200922 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerDied","Data":"c087ecfda238bc31ff77ddcb9a405db1c7a7fa452b6a2e20bce155584c9ba5cd"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.201004 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerStarted","Data":"1d559c8131b65830d407465c5f1a3e352213683d387841cbfa6f18ffd4ad3e63"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.212870 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" event={"ID":"0f4aee94-d32d-43e7-93b1-40c3a05ed8ef","Type":"ContainerStarted","Data":"966d01f2d730b930c79c8bd1e9063114ab7067969bcd59e512b2b9da17a878cf"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.237781 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" event={"ID":"7194dc7a-97fb-44de-b577-37143c6365e8","Type":"ContainerStarted","Data":"7e5ad1dde7620caefff8bf0fe7ffc0ee4b94621e36b0620581935d2e9a513165"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.240389 5110 generic.go:358] "Generic (PLEG): container finished" podID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerID="f7a779482691fbd290c2f2885bdd9ad8049e37869b33a02b3c115901ea8123c5" exitCode=0
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.240467 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerDied","Data":"f7a779482691fbd290c2f2885bdd9ad8049e37869b33a02b3c115901ea8123c5"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.240511 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerStarted","Data":"4e3fea56767ecb5a510d10de9dff8d67bb7689c9e1baceef8b1249d94784b737"}
pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerStarted","Data":"4e3fea56767ecb5a510d10de9dff8d67bb7689c9e1baceef8b1249d94784b737"} Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.269148 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.269483 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.269527 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.269569 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2p8\" (UniqueName: \"kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.270013 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.769995994 +0000 UTC m=+119.728232123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.271050 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.271421 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.283970 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.291525 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4"} Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.292245 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.331048 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" podStartSLOduration=98.331031476 podStartE2EDuration="1m38.331031476s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:17.327970745 +0000 UTC m=+119.286206874" watchObservedRunningTime="2026-01-30 00:14:17.331031476 +0000 UTC m=+119.289267605" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.336522 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2p8\" (UniqueName: \"kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8\") pod \"redhat-marketplace-4cxlz\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.337089 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwf28" event={"ID":"1fbd252e-c54f-4a19-b637-adb4d23722fc","Type":"ContainerStarted","Data":"7e1556315f5e12e100ef682f513d55c8f49ab59e41387d52219390b99a0ab59c"} Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.364663 5110 generic.go:358] "Generic (PLEG): container finished" podID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerID="29686b3a7804cadf449422f4901fa2f9dc2a71e00f9e847e55ddce0fe5978885" exitCode=0 Jan 30 00:14:17 crc 
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.364877 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbdwz" event={"ID":"4ef72b04-6d5e-47c5-ad83-fd680d001a38","Type":"ContainerStarted","Data":"01b90203016c1a23c3ab1c7282035c0fc264427200b5369541dc1a34652e3a70"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.370773 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.373809 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.873780487 +0000 UTC m=+119.832016766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.383914 5110 generic.go:358] "Generic (PLEG): container finished" podID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerID="c1dbe5229f434468ca1d8cd51e2bb610e3374a508574d476e66e56ffb9ef524e" exitCode=0
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.390781 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerDied","Data":"c1dbe5229f434468ca1d8cd51e2bb610e3374a508574d476e66e56ffb9ef524e"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.390861 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerStarted","Data":"d99820b18f7ed5cfbc8928a61c97abaf7483cfa9a91644a4279e5acabcc99733"}
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.397893 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" gracePeriod=30
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.398192 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.424848 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-dvkqm" podStartSLOduration=97.424829426 podStartE2EDuration="1m37.424829426s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:17.420888743 +0000 UTC m=+119.379124872" watchObservedRunningTime="2026-01-30 00:14:17.424829426 +0000 UTC m=+119.383065555"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.427637 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ln5fr"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.451831 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.473750 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.475380 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:17.975356292 +0000 UTC m=+119.933592421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.485380 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4cxlz"
Jan 30 00:14:17 crc kubenswrapper[5110]: W0130 00:14:17.503662 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode1c3f20d_618f_4206_8bbe_c2090a753c39.slice/crio-b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a WatchSource:0}: Error finding container b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a: Status 404 returned error can't find the container with id b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.557972 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vwf28" podStartSLOduration=98.557953189 podStartE2EDuration="1m38.557953189s" podCreationTimestamp="2026-01-30 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:17.553473082 +0000 UTC m=+119.511709211" watchObservedRunningTime="2026-01-30 00:14:17.557953189 +0000 UTC m=+119.516189318"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.586366 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.586748 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.086735714 +0000 UTC m=+120.044971833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.647162 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=39.647142859 podStartE2EDuration="39.647142859s" podCreationTimestamp="2026-01-30 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:17.623385156 +0000 UTC m=+119.581621295" watchObservedRunningTime="2026-01-30 00:14:17.647142859 +0000 UTC m=+119.605378988"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.694061 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.694729 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.194700747 +0000 UTC m=+120.152936866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.797749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.798678 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.298661984 +0000 UTC m=+120.256898113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.840361 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.861712 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.865734 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.868209 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"]
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.875854 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 30 00:14:17 crc kubenswrapper[5110]: I0130 00:14:17.900148 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:17 crc kubenswrapper[5110]: E0130 00:14:17.900772 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.400749212 +0000 UTC m=+120.358985341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.003523 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.003622 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.003697 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8dg\" (UniqueName: \"kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.003750 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.004142 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.504125524 +0000 UTC m=+120.462361653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.061913 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"]
Jan 30 00:14:18 crc kubenswrapper[5110]: W0130 00:14:18.093364 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe22ec96_01a6_4653_bf07_8fe0a61baf24.slice/crio-c63d7adb8f0f10103335f935cabdb40288bb59e5e02c0ee864666504e6110095 WatchSource:0}: Error finding container c63d7adb8f0f10103335f935cabdb40288bb59e5e02c0ee864666504e6110095: Status 404 returned error can't find the container with id c63d7adb8f0f10103335f935cabdb40288bb59e5e02c0ee864666504e6110095
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.104890 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.105079 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.605030592 +0000 UTC m=+120.563266721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.105699 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.105919 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.106190 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.106360 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8dg\" (UniqueName: \"kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.106671 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.106738 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.606722516 +0000 UTC m=+120.564958645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.106873 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.117806 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:14:18 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 30 00:14:18 crc kubenswrapper[5110]: [+]process-running ok
Jan 30 00:14:18 crc kubenswrapper[5110]: healthz check failed
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.117885 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.139258 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8dg\" (UniqueName: \"kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg\") pod \"redhat-operators-dksh5\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") " pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.191195 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.207491 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.207813 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.707795618 +0000 UTC m=+120.666031737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.222419 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"]
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.227144 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.240677 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"]
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.309371 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.309799 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.809777804 +0000 UTC m=+120.768013933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.361173 5110 ???:1] "http: TLS handshake error from 192.168.126.11:51286: no serving certificate available for the kubelet"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.410260 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.410557 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.910530287 +0000 UTC m=+120.868766416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.410890 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.411179 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.411345 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.411467 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8tt\" (UniqueName: \"kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.411919 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:18.911901923 +0000 UTC m=+120.870138052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.446301 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e1c3f20d-618f-4206-8bbe-c2090a753c39","Type":"ContainerStarted","Data":"b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a"}
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.484746 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerStarted","Data":"c63d7adb8f0f10103335f935cabdb40288bb59e5e02c0ee864666504e6110095"}
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.497741 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerStarted","Data":"30c3dea5021f7d0321fdd82b3231c35bf8451d803701bab429b57d7bf9e0c2eb"}
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.500263 5110 generic.go:358] "Generic (PLEG): container finished" podID="99861aba-0721-4a1b-9156-438f84b1480c" containerID="794b7c42ea948b9c1cecee055119681d5e55eddf3342076756be47ba9b961004" exitCode=0
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.500663 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" event={"ID":"99861aba-0721-4a1b-9156-438f84b1480c","Type":"ContainerDied","Data":"794b7c42ea948b9c1cecee055119681d5e55eddf3342076756be47ba9b961004"}
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.515107 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.515381 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.515434 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8tt\" (UniqueName: \"kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.515457 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp"
pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.515896 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.515973 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.015955803 +0000 UTC m=+120.974191932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.517569 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.544772 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8tt\" (UniqueName: \"kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt\") pod \"redhat-operators-2f6cp\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.554642 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.617258 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.622383 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.122365015 +0000 UTC m=+121.080601134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.718002 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.718702 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.218684152 +0000 UTC m=+121.176920271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.820962 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.821442 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.321427748 +0000 UTC m=+121.279663877 (durationBeforeRetry 500ms). 
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.922434 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.925504 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.425473268 +0000 UTC m=+121.383709397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:18 crc kubenswrapper[5110]: I0130 00:14:18.939603 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:18 crc kubenswrapper[5110]: E0130 00:14:18.940806 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.44079062 +0000 UTC m=+121.399026749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.006852 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.043248 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.043582 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.543560876 +0000 UTC m=+121.501797005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.118742 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:14:19 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 30 00:14:19 crc kubenswrapper[5110]: [+]process-running ok
Jan 30 00:14:19 crc kubenswrapper[5110]: healthz check failed
Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.118808 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.152386 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b"
Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.152780 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.652764781 +0000 UTC m=+121.611000910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.254971 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.255448 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.755425235 +0000 UTC m=+121.713661364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.357273 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.357694 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.857681247 +0000 UTC m=+121.815917376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.367934 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-k8w5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.368018 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-k8w5p" podUID="1f225323-7f5a-46bf-a9a3-1093d025b0b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.398054 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"] Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.458888 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.459151 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:19.959129099 +0000 UTC m=+121.917365228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.518022 5110 generic.go:358] "Generic (PLEG): container finished" podID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerID="77275dd151930822d05a8ad3719cbe61b8509347ef73600f71ead598321c825a" exitCode=0 Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.518588 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerDied","Data":"77275dd151930822d05a8ad3719cbe61b8509347ef73600f71ead598321c825a"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.520968 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerStarted","Data":"989c5e9e22814338a65ca7e4404590eb04ac7f9847a7f90b33569f10a0e0cf68"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.523881 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" event={"ID":"39298aae-aa93-40ad-8dfc-9d5fdea9ae10","Type":"ContainerStarted","Data":"477f91c9af968f3badeb36cd73c044e5536fce59e11a432c87c53f3805d976bd"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.532921 5110 generic.go:358] "Generic (PLEG): container finished" podID="e1c3f20d-618f-4206-8bbe-c2090a753c39" containerID="d5e2d9b3b8f67151c4109ea89f4ebbbb677f1e047a3dadf12ce4a119502d3933" exitCode=0 Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.533009 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e1c3f20d-618f-4206-8bbe-c2090a753c39","Type":"ContainerDied","Data":"d5e2d9b3b8f67151c4109ea89f4ebbbb677f1e047a3dadf12ce4a119502d3933"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.536974 5110 generic.go:358] "Generic (PLEG): container finished" podID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerID="755b3e7f7521811c707410a99d60ebdea6be3ad4440600825acb51c1507f5a16" exitCode=0 Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.537029 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerDied","Data":"755b3e7f7521811c707410a99d60ebdea6be3ad4440600825acb51c1507f5a16"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.542116 5110 generic.go:358] "Generic (PLEG): container finished" podID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerID="214ead4c0c332a5ed61cfb3d10f9817595c4979b3f119038a17700bb2ee06e07" exitCode=0 Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.542612 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerDied","Data":"214ead4c0c332a5ed61cfb3d10f9817595c4979b3f119038a17700bb2ee06e07"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.542657 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerStarted","Data":"e682ae88aa4f1035781244adc67c82d9b1709f3cbae4422e9539d7c6ae97a675"} Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.562722 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.564611 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.064587906 +0000 UTC m=+122.022824035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.665257 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.665575 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.165540515 +0000 UTC m=+122.123776644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.732682 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.732762 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.768397 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.769655 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.269640746 +0000 UTC m=+122.227876875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.852923 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.869287 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume\") pod \"99861aba-0721-4a1b-9156-438f84b1480c\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.869441 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw7bd\" (UniqueName: \"kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd\") pod \"99861aba-0721-4a1b-9156-438f84b1480c\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.869524 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume\") pod \"99861aba-0721-4a1b-9156-438f84b1480c\" (UID: \"99861aba-0721-4a1b-9156-438f84b1480c\") " Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.869688 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.870085 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.370063201 +0000 UTC m=+122.328299330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.870583 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume" (OuterVolumeSpecName: "config-volume") pod "99861aba-0721-4a1b-9156-438f84b1480c" (UID: "99861aba-0721-4a1b-9156-438f84b1480c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.886607 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd" (OuterVolumeSpecName: "kube-api-access-cw7bd") pod "99861aba-0721-4a1b-9156-438f84b1480c" (UID: "99861aba-0721-4a1b-9156-438f84b1480c"). InnerVolumeSpecName "kube-api-access-cw7bd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.890849 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.890883 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.895451 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "99861aba-0721-4a1b-9156-438f84b1480c" (UID: "99861aba-0721-4a1b-9156-438f84b1480c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.915966 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-q9fd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.916036 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-q9fd8" podUID="b1823c5b-86dc-4bbf-8964-bc19dba82794" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.976753 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.978518 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99861aba-0721-4a1b-9156-438f84b1480c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.978543 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cw7bd\" (UniqueName: \"kubernetes.io/projected/99861aba-0721-4a1b-9156-438f84b1480c-kube-api-access-cw7bd\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:19 crc kubenswrapper[5110]: I0130 00:14:19.978555 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99861aba-0721-4a1b-9156-438f84b1480c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:19 crc kubenswrapper[5110]: E0130 00:14:19.979069 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.47904722 +0000 UTC m=+122.437283349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.080962 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.081144 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.581106348 +0000 UTC m=+122.539342477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.081867 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.082230 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.582217767 +0000 UTC m=+122.540453896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.107064 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.110134 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:20 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:20 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:20 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.110201 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.183770 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.183961 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.683929025 +0000 UTC m=+122.642165154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.184472 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.185151 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.685132897 +0000 UTC m=+122.643369026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.288391 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.288674 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.788654443 +0000 UTC m=+122.746890572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.313083 5110 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-4qlhj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]log ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]etcd ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/max-in-flight-filter ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 00:14:20 crc kubenswrapper[5110]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 00:14:20 crc kubenswrapper[5110]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 00:14:20 crc kubenswrapper[5110]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 00:14:20 crc kubenswrapper[5110]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:14:20 crc kubenswrapper[5110]: livez check failed Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.313181 5110 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" podUID="0f4aee94-d32d-43e7-93b1-40c3a05ed8ef" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.390428 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.390791 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.890777223 +0000 UTC m=+122.849013352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.498710 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.498824 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.998795137 +0000 UTC m=+122.957031266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.499483 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.500017 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:20.999997218 +0000 UTC m=+122.958233347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.568103 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.568368 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-jjx65" event={"ID":"99861aba-0721-4a1b-9156-438f84b1480c","Type":"ContainerDied","Data":"c5259e8f3dd9637ce888997c3a096b3c7e662669d9a4a486b3862a6c65970f55"} Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.568420 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5259e8f3dd9637ce888997c3a096b3c7e662669d9a4a486b3862a6c65970f55" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.572418 5110 generic.go:358] "Generic (PLEG): container finished" podID="02c56d75-9b83-41cf-8126-774743923b26" containerID="ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6" exitCode=0 Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.572576 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerDied","Data":"ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6"} Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.601971 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.602270 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.10221971 +0000 UTC m=+123.060455839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.602707 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.603293 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.103118444 +0000 UTC m=+123.061354573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.704594 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.705887 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.205834279 +0000 UTC m=+123.164070408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.809535 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.810027 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.310008842 +0000 UTC m=+123.268244971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.895135 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.913179 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:20 crc kubenswrapper[5110]: E0130 00:14:20.913508 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.413488007 +0000 UTC m=+123.371724136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:20 crc kubenswrapper[5110]: I0130 00:14:20.947922 5110 ???:1] "http: TLS handshake error from 192.168.126.11:51296: no serving certificate available for the kubelet" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.014790 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir\") pod \"e1c3f20d-618f-4206-8bbe-c2090a753c39\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.015015 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access\") pod \"e1c3f20d-618f-4206-8bbe-c2090a753c39\" (UID: \"e1c3f20d-618f-4206-8bbe-c2090a753c39\") " Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.015699 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e1c3f20d-618f-4206-8bbe-c2090a753c39" (UID: "e1c3f20d-618f-4206-8bbe-c2090a753c39"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.016018 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.016200 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1c3f20d-618f-4206-8bbe-c2090a753c39-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.016591 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.516575621 +0000 UTC m=+123.474811750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.048326 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e1c3f20d-618f-4206-8bbe-c2090a753c39" (UID: "e1c3f20d-618f-4206-8bbe-c2090a753c39"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.110438 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:21 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:21 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:21 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.110509 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.117188 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.117313 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.617291964 +0000 UTC m=+123.575528093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.118435 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.118800 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1c3f20d-618f-4206-8bbe-c2090a753c39-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.118895 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.618870945 +0000 UTC m=+123.577107074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.220323 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.220535 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.720497872 +0000 UTC m=+123.678733991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.221066 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.221411 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.721403686 +0000 UTC m=+123.679639815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.322737 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.323305 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.823252137 +0000 UTC m=+123.781488266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.353710 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354604 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1c3f20d-618f-4206-8bbe-c2090a753c39" containerName="pruner" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354620 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c3f20d-618f-4206-8bbe-c2090a753c39" containerName="pruner" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354630 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99861aba-0721-4a1b-9156-438f84b1480c" containerName="collect-profiles" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354638 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="99861aba-0721-4a1b-9156-438f84b1480c" containerName="collect-profiles" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354869 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="99861aba-0721-4a1b-9156-438f84b1480c" containerName="collect-profiles" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.354880 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1c3f20d-618f-4206-8bbe-c2090a753c39" containerName="pruner" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.364634 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.364799 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.369958 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.370635 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.424580 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.424994 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:21.924970856 +0000 UTC m=+123.883206985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.526392 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.526594 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.026562591 +0000 UTC m=+123.984798720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.526706 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.526903 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.527071 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.027054584 +0000 UTC m=+123.985290713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.527108 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.591907 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e1c3f20d-618f-4206-8bbe-c2090a753c39","Type":"ContainerDied","Data":"b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a"} Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.591954 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ba84b7c550e9018e12da4fccc601d7567530bb6b6ac3ca49400630506ec29a" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.592040 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.627848 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.628052 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.128020553 +0000 UTC m=+124.086256682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.628610 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.628789 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.628990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.629177 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.629252 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.129226104 +0000 UTC m=+124.087462233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.651565 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.724473 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.729961 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.730231 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.230195624 +0000 UTC m=+124.188431753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.730496 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.730914 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.230898382 +0000 UTC m=+124.189134511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.832426 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.832636 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.332603301 +0000 UTC m=+124.290839430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.833063 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.833401 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.333387811 +0000 UTC m=+124.291623940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.934111 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.934318 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.434278048 +0000 UTC m=+124.392514177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.934483 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:21 crc kubenswrapper[5110]: E0130 00:14:21.935198 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.435191922 +0000 UTC m=+124.393428051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:21 crc kubenswrapper[5110]: I0130 00:14:21.994395 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.036907 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.037464 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.537431984 +0000 UTC m=+124.495668113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.038001 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.038408 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.53839021 +0000 UTC m=+124.496626329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.109962 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:22 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:22 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:22 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.110054 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.140413 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.140654 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.640621012 +0000 UTC m=+124.598857141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.141048 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.143482 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.643463026 +0000 UTC m=+124.601699155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.242047 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.242161 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.742131815 +0000 UTC m=+124.700367944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.242464 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.242920 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.742895785 +0000 UTC m=+124.701131914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.343689 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.344208 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.844165182 +0000 UTC m=+124.802401311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.434670 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-k8w5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.434762 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-k8w5p" podUID="1f225323-7f5a-46bf-a9a3-1093d025b0b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.453277 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.453781 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:22.953761918 +0000 UTC m=+124.911998047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.554977 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.555171 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.055141868 +0000 UTC m=+125.013377997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.555812 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.556290 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.056271927 +0000 UTC m=+125.014508056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.601635 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" event={"ID":"39298aae-aa93-40ad-8dfc-9d5fdea9ae10","Type":"ContainerStarted","Data":"b8a69073ca8cf646963ae9cc3dc51d1b984729d6faaf9c224aea5d438cf4e361"} Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.603561 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714","Type":"ContainerStarted","Data":"b50bf2f8204cecb71eb2bfbd2492be8f2d84b917b061bf8fe920296a7ab98aa4"} Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.657823 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.658051 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.158018147 +0000 UTC m=+125.116254276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.658455 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.658862 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.158853749 +0000 UTC m=+125.117089878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.760756 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.761107 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.261083671 +0000 UTC m=+125.219319790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.862484 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.862955 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.362933273 +0000 UTC m=+125.321169402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.964219 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.964458 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.464424646 +0000 UTC m=+125.422660775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:22 crc kubenswrapper[5110]: I0130 00:14:22.964621 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:22 crc kubenswrapper[5110]: E0130 00:14:22.965348 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:14:23.465307839 +0000 UTC m=+125.423543968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nh26b" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.031807 5110 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.046963 5110 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T00:14:23.032012429Z","UUID":"91a1dc64-bffe-4523-b17d-eb4ed12967dd","Handler":null,"Name":"","Endpoint":""} Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.056174 5110 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.056221 5110 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.066958 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.072309 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: 
"9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.110497 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:23 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.110575 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.169089 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.256713 5110 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.256794 5110 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.463363 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nh26b\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.609300 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:14:23 crc kubenswrapper[5110]: I0130 00:14:23.615426 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.112590 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:24 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:24 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:24 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.112679 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:24 crc kubenswrapper[5110]: E0130 00:14:24.135324 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:24 crc kubenswrapper[5110]: E0130 00:14:24.138513 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:24 crc kubenswrapper[5110]: E0130 00:14:24.141540 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:24 crc kubenswrapper[5110]: E0130 00:14:24.141628 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.503747 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tm4w8" Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.634252 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714","Type":"ContainerStarted","Data":"4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d"} Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.657859 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.657838396 podStartE2EDuration="3.657838396s" podCreationTimestamp="2026-01-30 00:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:24.652860345 +0000 UTC m=+126.611096484" watchObservedRunningTime="2026-01-30 
Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.737381 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.743060 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-4qlhj" Jan 30 00:14:24 crc kubenswrapper[5110]: I0130 00:14:24.887026 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 30 00:14:25 crc kubenswrapper[5110]: I0130 00:14:25.110647 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:25 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:25 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:25 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:25 crc kubenswrapper[5110]: I0130 00:14:25.110744 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:25 crc kubenswrapper[5110]: I0130 00:14:25.642536 5110 generic.go:358] "Generic (PLEG): container finished" podID="9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" containerID="4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d" exitCode=0 Jan 30 00:14:25 crc kubenswrapper[5110]: I0130 00:14:25.644030 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714","Type":"ContainerDied","Data":"4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d"} Jan 30 00:14:26 crc kubenswrapper[5110]: I0130 00:14:26.097857 5110 ???:1] "http: TLS handshake error from 192.168.126.11:51300: no serving certificate available for the kubelet" Jan 30 00:14:26 crc kubenswrapper[5110]: I0130 00:14:26.109292 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:26 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:26 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:26 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:26 crc kubenswrapper[5110]: I0130 00:14:26.109441 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:27 crc kubenswrapper[5110]: I0130 00:14:27.110446 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:27 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:27 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:27 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:27 crc kubenswrapper[5110]: I0130 00:14:27.111256 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
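
The router's startup probe keeps returning HTTP 500 once a second. The multi-line probe output ([-]backend-http failed, [-]has-synced failed, [+]process-running ok, healthz check failed) is the conventional aggregated-healthz format: each named check reports [+] or [-], and the endpoint stays 500 until every check passes. By 00:14:31 below, has-synced flips to [+] and only backend-http is still failing; at 00:14:32 the probe finally succeeds. A minimal sketch of an endpoint producing this format, with check names mirroring the log (the handler, port, and check logic are illustrative, not the OpenShift router's implementation):

    // Aggregated healthz sketch: per-check [+]/[-] report, 500 until all pass.
    package main

    import (
    	"fmt"
    	"net/http"
    )

    type check struct {
    	name string
    	fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		body, failed := "", false
    		for _, c := range checks {
    			if err := c.fn(); err != nil {
    				// Real implementations often withhold the reason, as here.
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
    				failed = true
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", c.name)
    			}
    		}
    		if failed {
    			body += "healthz check failed\n"
    			w.WriteHeader(http.StatusInternalServerError) // what the kubelet logs as "statuscode: 500"
    		}
    		fmt.Fprint(w, body)
    	}
    }

    func main() {
    	checks := []check{
    		{"backend-http", func() error { return fmt.Errorf("backends not loaded yet") }},
    		{"has-synced", func() error { return nil }},
    		{"process-running", func() error { return nil }},
    	}
    	http.ListenAndServe(":1936", healthz(checks)) // port is illustrative
    }
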
Jan 30 00:14:28 crc kubenswrapper[5110]: I0130 00:14:28.109143 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:28 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:28 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:28 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:28 crc kubenswrapper[5110]: I0130 00:14:28.109807 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:28 crc kubenswrapper[5110]: I0130 00:14:28.506091 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:29 crc kubenswrapper[5110]: I0130 00:14:29.118926 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:29 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:29 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:29 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:29 crc kubenswrapper[5110]: I0130 00:14:29.119955 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:29 crc kubenswrapper[5110]: I0130 00:14:29.891490 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-q9fd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 00:14:29 crc kubenswrapper[5110]: I0130 00:14:29.892666 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-q9fd8" podUID="b1823c5b-86dc-4bbf-8964-bc19dba82794" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 00:14:30 crc kubenswrapper[5110]: I0130 00:14:30.109549 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:30 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 30 00:14:30 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:30 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:30 crc kubenswrapper[5110]: I0130 00:14:30.109685 5110 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:31 crc kubenswrapper[5110]: I0130 00:14:31.110400 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-q4bkd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:14:31 crc kubenswrapper[5110]: [+]has-synced ok Jan 30 00:14:31 crc kubenswrapper[5110]: [+]process-running ok Jan 30 00:14:31 crc kubenswrapper[5110]: healthz check failed Jan 30 00:14:31 crc kubenswrapper[5110]: I0130 00:14:31.110912 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" podUID="71abf881-27f0-4048-8f11-5585b96cf594" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:31 crc kubenswrapper[5110]: I0130 00:14:31.569843 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:14:32 crc kubenswrapper[5110]: I0130 00:14:32.111127 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:32 crc kubenswrapper[5110]: I0130 00:14:32.115998 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-q4bkd" Jan 30 00:14:32 crc kubenswrapper[5110]: I0130 00:14:32.440860 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-k8w5p" Jan 30 00:14:34 crc kubenswrapper[5110]: E0130 00:14:34.136523 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:34 crc kubenswrapper[5110]: E0130 00:14:34.143509 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:34 crc kubenswrapper[5110]: E0130 00:14:34.145767 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:34 crc kubenswrapper[5110]: E0130 00:14:34.145897 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:14:36 crc kubenswrapper[5110]: I0130 00:14:36.382899 5110 ???:1] "http: TLS handshake error from 192.168.126.11:47508: no serving certificate available for the kubelet" Jan 30 00:14:39 crc 
kubenswrapper[5110]: I0130 00:14:39.897033 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:39 crc kubenswrapper[5110]: I0130 00:14:39.904367 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-q9fd8" Jan 30 00:14:44 crc kubenswrapper[5110]: E0130 00:14:44.135289 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:44 crc kubenswrapper[5110]: E0130 00:14:44.137823 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:44 crc kubenswrapper[5110]: E0130 00:14:44.139314 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:14:44 crc kubenswrapper[5110]: E0130 00:14:44.139383 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:14:45 crc kubenswrapper[5110]: I0130 00:14:45.998529 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.121982 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir\") pod \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.122118 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" (UID: "9607d3aa-1a99-4e1f-a0a7-cb8da1be1714"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.122157 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access\") pod \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\" (UID: \"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714\") " Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.122493 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.129692 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" (UID: "9607d3aa-1a99-4e1f-a0a7-cb8da1be1714"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.225845 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9607d3aa-1a99-4e1f-a0a7-cb8da1be1714-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.597051 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"] Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.830674 5110 generic.go:358] "Generic (PLEG): container finished" podID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerID="da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.830802 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbdwz" event={"ID":"4ef72b04-6d5e-47c5-ad83-fd680d001a38","Type":"ContainerDied","Data":"da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.833193 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" event={"ID":"39298aae-aa93-40ad-8dfc-9d5fdea9ae10","Type":"ContainerStarted","Data":"718b26507b70c1c7576ce15640b15a65bd533068fdde0017135f53cf8cb36805"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.836630 5110 generic.go:358] "Generic (PLEG): container finished" podID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerID="867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.836715 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerDied","Data":"867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.846443 5110 generic.go:358] "Generic (PLEG): container finished" podID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerID="dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.846572 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" 
event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerDied","Data":"dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.853075 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerStarted","Data":"5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.868297 5110 generic.go:358] "Generic (PLEG): container finished" podID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerID="89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.868448 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerDied","Data":"89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.879158 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.884788 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"9607d3aa-1a99-4e1f-a0a7-cb8da1be1714","Type":"ContainerDied","Data":"b50bf2f8204cecb71eb2bfbd2492be8f2d84b917b061bf8fe920296a7ab98aa4"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.884835 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50bf2f8204cecb71eb2bfbd2492be8f2d84b917b061bf8fe920296a7ab98aa4" Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.887973 5110 generic.go:358] "Generic (PLEG): container finished" podID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerID="5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.888202 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerDied","Data":"5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.892612 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" event={"ID":"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71","Type":"ContainerStarted","Data":"fafd3489479cd29b13872e5ddb61c8899368bf195547833dd2ff21a9f40d6d4d"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.897787 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerStarted","Data":"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece"} Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.917656 5110 generic.go:358] "Generic (PLEG): container finished" podID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerID="7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15" exitCode=0 Jan 30 00:14:46 crc kubenswrapper[5110]: I0130 00:14:46.917807 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" 
event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerDied","Data":"7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15"} Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.466240 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-b50bf2f8204cecb71eb2bfbd2492be8f2d84b917b061bf8fe920296a7ab98aa4": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-b50bf2f8204cecb71eb2bfbd2492be8f2d84b917b061bf8fe920296a7ab98aa4: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.466770 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-conmon-4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-conmon-4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.466797 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice/crio-4f61997df5ab7222e7808496b69907e0253d10c06c73c2889362d20a6b6e0c2d.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471315 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b966975_ecee_4596_bdc1_c92dbe87e93d.slice/crio-conmon-7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b966975_ecee_4596_bdc1_c92dbe87e93d.slice/crio-conmon-7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471394 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe22ec96_01a6_4653_bf07_8fe0a61baf24.slice/crio-conmon-dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe22ec96_01a6_4653_bf07_8fe0a61baf24.slice/crio-conmon-dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471418 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf09b72_11d4_49f3_977d_60a148c40caf.slice/crio-conmon-5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf09b72_11d4_49f3_977d_60a148c40caf.slice/crio-conmon-5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: 
W0130 00:14:47.471457 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b966975_ecee_4596_bdc1_c92dbe87e93d.slice/crio-7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b966975_ecee_4596_bdc1_c92dbe87e93d.slice/crio-7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471479 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe22ec96_01a6_4653_bf07_8fe0a61baf24.slice/crio-dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe22ec96_01a6_4653_bf07_8fe0a61baf24.slice/crio-dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471501 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf09b72_11d4_49f3_977d_60a148c40caf.slice/crio-5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf09b72_11d4_49f3_977d_60a148c40caf.slice/crio-5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471523 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef72b04_6d5e_47c5_ad83_fd680d001a38.slice/crio-conmon-da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef72b04_6d5e_47c5_ad83_fd680d001a38.slice/crio-conmon-da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.471548 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc2128b_e711_46fb_8f8a_71fe2622af5d.slice/crio-conmon-867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc2128b_e711_46fb_8f8a_71fe2622af5d.slice/crio-conmon-867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472059 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef72b04_6d5e_47c5_ad83_fd680d001a38.slice/crio-da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef72b04_6d5e_47c5_ad83_fd680d001a38.slice/crio-da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959.scope: no such file or directory Jan 30 00:14:47 crc 
kubenswrapper[5110]: W0130 00:14:47.472113 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3623fe3d_6a68_4c3a_9c0f_c3eb381dd958.slice/crio-conmon-5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3623fe3d_6a68_4c3a_9c0f_c3eb381dd958.slice/crio-conmon-5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472141 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc2128b_e711_46fb_8f8a_71fe2622af5d.slice/crio-867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc2128b_e711_46fb_8f8a_71fe2622af5d.slice/crio-867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472304 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3623fe3d_6a68_4c3a_9c0f_c3eb381dd958.slice/crio-5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3623fe3d_6a68_4c3a_9c0f_c3eb381dd958.slice/crio-5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472347 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c56d75_9b83_41cf_8126_774743923b26.slice/crio-conmon-e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c56d75_9b83_41cf_8126_774743923b26.slice/crio-conmon-e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472664 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c56d75_9b83_41cf_8126_774743923b26.slice/crio-e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c56d75_9b83_41cf_8126_774743923b26.slice/crio-e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472695 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b33ddbf_d5b6_42be_a4d1_978a794801eb.slice/crio-conmon-89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b33ddbf_d5b6_42be_a4d1_978a794801eb.slice/crio-conmon-89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331.scope: no such file or directory Jan 
30 00:14:47 crc kubenswrapper[5110]: W0130 00:14:47.472724 5110 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b33ddbf_d5b6_42be_a4d1_978a794801eb.slice/crio-89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b33ddbf_d5b6_42be_a4d1_978a794801eb.slice/crio-89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331.scope: no such file or directory Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.537028 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zfs4d_1e56c2e9-b76d-4ffe-9af6-dd6850d11a40/kube-multus-additional-cni-plugins/0.log" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.537775 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:47 crc kubenswrapper[5110]: E0130 00:14:47.610823 5110 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod9607d3aa_1a99_4e1f_a0a7_cb8da1be1714.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e56c2e9_b76d_4ffe_9af6_dd6850d11a40.slice/crio-conmon-3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e56c2e9_b76d_4ffe_9af6_dd6850d11a40.slice/crio-3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664.scope\": RecentStats: unable to find data in memory cache]" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.653276 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready\") pod \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.653395 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzclr\" (UniqueName: \"kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr\") pod \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.653485 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir\") pod \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.653649 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist\") pod \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\" (UID: \"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40\") " Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.654377 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready" (OuterVolumeSpecName: "ready") pod "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" (UID: 
"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.654464 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" (UID: "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.654576 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" (UID: "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.662486 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr" (OuterVolumeSpecName: "kube-api-access-kzclr") pod "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" (UID: "1e56c2e9-b76d-4ffe-9af6-dd6850d11a40"). InnerVolumeSpecName "kube-api-access-kzclr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.755751 5110 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-ready\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.756724 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzclr\" (UniqueName: \"kubernetes.io/projected/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-kube-api-access-kzclr\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.756871 5110 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.756998 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.930156 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerStarted","Data":"14f7bcfd0285f42cde921c8b2c95b711d9197c4b60f9f0fc99905ad68265932f"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.932711 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerStarted","Data":"3abe7e1a0ebf8e2db3c06988f8757a5ef0c2a13cf351a5fc27a9890dac85006f"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.933980 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" 
event={"ID":"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71","Type":"ContainerStarted","Data":"ca4dcab40aef41094c6e5c4c440741457d0e9f0ef2477a1d684109e58bff866d"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.934192 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.936057 5110 generic.go:358] "Generic (PLEG): container finished" podID="02c56d75-9b83-41cf-8126-774743923b26" containerID="e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece" exitCode=0 Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.936170 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerDied","Data":"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.939276 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerStarted","Data":"783900952ff3c1f6825b389b55bc37952e3cf7632de9dcfbe66e05214cf9455b"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.940989 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zfs4d_1e56c2e9-b76d-4ffe-9af6-dd6850d11a40/kube-multus-additional-cni-plugins/0.log" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.941067 5110 generic.go:358] "Generic (PLEG): container finished" podID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" exitCode=137 Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.941105 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" event={"ID":"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40","Type":"ContainerDied","Data":"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.941158 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" event={"ID":"1e56c2e9-b76d-4ffe-9af6-dd6850d11a40","Type":"ContainerDied","Data":"3429d380c5214e63b1bcfd9eae9503c130f864cd767155e45f883f2e6c5b55c7"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.941167 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zfs4d" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.941184 5110 scope.go:117] "RemoveContainer" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.947907 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbdwz" event={"ID":"4ef72b04-6d5e-47c5-ad83-fd680d001a38","Type":"ContainerStarted","Data":"644609d40ef8ac6c06c92c064781ab68471a42256539125499c71aaab76db2cd"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.961252 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" event={"ID":"39298aae-aa93-40ad-8dfc-9d5fdea9ae10","Type":"ContainerStarted","Data":"26459d30b2893eb48d513169be17972a9983f6b4cd5d899653df17acb4ad50e9"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.969424 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bw6vt" podStartSLOduration=4.985688103 podStartE2EDuration="33.96940139s" podCreationTimestamp="2026-01-30 00:14:14 +0000 UTC" firstStartedPulling="2026-01-30 00:14:17.201497017 +0000 UTC m=+119.159733146" lastFinishedPulling="2026-01-30 00:14:46.185210304 +0000 UTC m=+148.143446433" observedRunningTime="2026-01-30 00:14:47.963373611 +0000 UTC m=+149.921609780" watchObservedRunningTime="2026-01-30 00:14:47.96940139 +0000 UTC m=+149.927637559" Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.984880 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerStarted","Data":"b78bc9346a3d22df2d3927b657730c01f5a8df684a9c1823ca1d96fc854706a6"} Jan 30 00:14:47 crc kubenswrapper[5110]: I0130 00:14:47.998051 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ldfsg" podStartSLOduration=4.067419644 podStartE2EDuration="32.998028614s" podCreationTimestamp="2026-01-30 00:14:15 +0000 UTC" firstStartedPulling="2026-01-30 00:14:17.241141037 +0000 UTC m=+119.199377166" lastFinishedPulling="2026-01-30 00:14:46.171750007 +0000 UTC m=+148.129986136" observedRunningTime="2026-01-30 00:14:47.993941579 +0000 UTC m=+149.952177718" watchObservedRunningTime="2026-01-30 00:14:47.998028614 +0000 UTC m=+149.956264763" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.007659 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerStarted","Data":"033c3b7f9bcc112e06995ddb2ca4204a9fe4c7c5bc33be29a27ba60c4b680b8a"} Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.015018 5110 generic.go:358] "Generic (PLEG): container finished" podID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerID="5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf" exitCode=0 Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.015066 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerDied","Data":"5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf"} Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.023942 5110 scope.go:117] "RemoveContainer" 
containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" Jan 30 00:14:48 crc kubenswrapper[5110]: E0130 00:14:48.024961 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664\": container with ID starting with 3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664 not found: ID does not exist" containerID="3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.025033 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664"} err="failed to get container status \"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664\": rpc error: code = NotFound desc = could not find container \"3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664\": container with ID starting with 3d8c686cf8b11df6c5d03d8c1dcacc187da6d5ce0c7c35b28af152b5d76bb664 not found: ID does not exist" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.039990 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8l6l9" podStartSLOduration=5.37498378 podStartE2EDuration="32.039976051s" podCreationTimestamp="2026-01-30 00:14:16 +0000 UTC" firstStartedPulling="2026-01-30 00:14:19.519084172 +0000 UTC m=+121.477320301" lastFinishedPulling="2026-01-30 00:14:46.184076403 +0000 UTC m=+148.142312572" observedRunningTime="2026-01-30 00:14:48.014753083 +0000 UTC m=+149.972989222" watchObservedRunningTime="2026-01-30 00:14:48.039976051 +0000 UTC m=+149.998212180" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.042185 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" podStartSLOduration=128.042177773 podStartE2EDuration="2m8.042177773s" podCreationTimestamp="2026-01-30 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:48.037914103 +0000 UTC m=+149.996150262" watchObservedRunningTime="2026-01-30 00:14:48.042177773 +0000 UTC m=+150.000413892" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.107466 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8r5gk" podStartSLOduration=5.307436378 podStartE2EDuration="34.107440125s" podCreationTimestamp="2026-01-30 00:14:14 +0000 UTC" firstStartedPulling="2026-01-30 00:14:17.385422563 +0000 UTC m=+119.343658692" lastFinishedPulling="2026-01-30 00:14:46.18542631 +0000 UTC m=+148.143662439" observedRunningTime="2026-01-30 00:14:48.100425308 +0000 UTC m=+150.058661437" watchObservedRunningTime="2026-01-30 00:14:48.107440125 +0000 UTC m=+150.065676254" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.133633 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jbdwz" podStartSLOduration=5.314111841 podStartE2EDuration="34.13361339s" podCreationTimestamp="2026-01-30 00:14:14 +0000 UTC" firstStartedPulling="2026-01-30 00:14:17.36588025 +0000 UTC m=+119.324116379" lastFinishedPulling="2026-01-30 00:14:46.185381799 +0000 UTC m=+148.143617928" observedRunningTime="2026-01-30 00:14:48.132925611 +0000 UTC m=+150.091161740" 
watchObservedRunningTime="2026-01-30 00:14:48.13361339 +0000 UTC m=+150.091849519" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.158774 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4cxlz" podStartSLOduration=5.55863984 podStartE2EDuration="32.158755326s" podCreationTimestamp="2026-01-30 00:14:16 +0000 UTC" firstStartedPulling="2026-01-30 00:14:19.53883908 +0000 UTC m=+121.497075209" lastFinishedPulling="2026-01-30 00:14:46.138954566 +0000 UTC m=+148.097190695" observedRunningTime="2026-01-30 00:14:48.157169851 +0000 UTC m=+150.115405980" watchObservedRunningTime="2026-01-30 00:14:48.158755326 +0000 UTC m=+150.116991455" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.188303 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zfs4d"] Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.192503 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zfs4d"] Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.211886 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2dmcg" podStartSLOduration=42.211864837 podStartE2EDuration="42.211864837s" podCreationTimestamp="2026-01-30 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:48.210785136 +0000 UTC m=+150.169021265" watchObservedRunningTime="2026-01-30 00:14:48.211864837 +0000 UTC m=+150.170100966" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.508124 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-52wpg" Jan 30 00:14:48 crc kubenswrapper[5110]: I0130 00:14:48.513233 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:14:49 crc kubenswrapper[5110]: I0130 00:14:49.114765 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" path="/var/lib/kubelet/pods/1e56c2e9-b76d-4ffe-9af6-dd6850d11a40/volumes" Jan 30 00:14:49 crc kubenswrapper[5110]: I0130 00:14:49.118710 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerStarted","Data":"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447"} Jan 30 00:14:49 crc kubenswrapper[5110]: I0130 00:14:49.130660 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerStarted","Data":"c064f8dd3db3b76d883381c6dcde6de5459a062ce3a6e39d706e28ac9f9b0a1e"} Jan 30 00:14:49 crc kubenswrapper[5110]: I0130 00:14:49.155690 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2f6cp" podStartSLOduration=5.477640052 podStartE2EDuration="31.155669592s" podCreationTimestamp="2026-01-30 00:14:18 +0000 UTC" firstStartedPulling="2026-01-30 00:14:20.573479856 +0000 UTC m=+122.531715985" lastFinishedPulling="2026-01-30 00:14:46.251509386 +0000 UTC m=+148.209745525" observedRunningTime="2026-01-30 00:14:49.151540676 +0000 UTC m=+151.109776815" watchObservedRunningTime="2026-01-30 00:14:49.155669592 +0000 
UTC m=+151.113905721" Jan 30 00:14:49 crc kubenswrapper[5110]: I0130 00:14:49.175778 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dksh5" podStartSLOduration=5.53457041 podStartE2EDuration="32.175753696s" podCreationTimestamp="2026-01-30 00:14:17 +0000 UTC" firstStartedPulling="2026-01-30 00:14:19.543119703 +0000 UTC m=+121.501355822" lastFinishedPulling="2026-01-30 00:14:46.184302979 +0000 UTC m=+148.142539108" observedRunningTime="2026-01-30 00:14:49.175082167 +0000 UTC m=+151.133318306" watchObservedRunningTime="2026-01-30 00:14:49.175753696 +0000 UTC m=+151.133989825" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.011100 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.011896 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.196565 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.198801 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.339253 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.339401 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.380485 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.380595 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.396919 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jbdwz" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.429621 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.598397 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.599047 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:14:55 crc kubenswrapper[5110]: I0130 00:14:55.646642 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:14:56 crc kubenswrapper[5110]: I0130 00:14:56.220257 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bw6vt" Jan 30 00:14:56 crc kubenswrapper[5110]: I0130 00:14:56.231181 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:14:56 
crc kubenswrapper[5110]: I0130 00:14:56.240805 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:14:56 crc kubenswrapper[5110]: I0130 00:14:56.894772 5110 ???:1] "http: TLS handshake error from 192.168.126.11:55602: no serving certificate available for the kubelet" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.029641 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.029777 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.088880 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.221924 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8l6l9" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.374148 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"] Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.486686 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.486949 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.526502 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:57 crc kubenswrapper[5110]: I0130 00:14:57.972114 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldfsg"] Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.186739 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ldfsg" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="registry-server" containerID="cri-o://783900952ff3c1f6825b389b55bc37952e3cf7632de9dcfbe66e05214cf9455b" gracePeriod=2 Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.187255 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8r5gk" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="registry-server" containerID="cri-o://b78bc9346a3d22df2d3927b657730c01f5a8df684a9c1823ca1d96fc854706a6" gracePeriod=2 Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.195407 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dksh5" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.195438 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dksh5" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.247491 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.249568 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dksh5" Jan 30 
00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.352445 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353363 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" containerName="pruner" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353391 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" containerName="pruner" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353419 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353430 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353565 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9607d3aa-1a99-4e1f-a0a7-cb8da1be1714" containerName="pruner" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.353597 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e56c2e9-b76d-4ffe-9af6-dd6850d11a40" containerName="kube-multus-additional-cni-plugins" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.745348 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.745513 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.745531 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.745547 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.746018 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.749452 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.749926 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.807428 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.867687 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.867762 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.969720 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.969833 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.969884 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:58 crc kubenswrapper[5110]: I0130 00:14:58.993800 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:59 crc kubenswrapper[5110]: I0130 00:14:59.070554 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:14:59 crc kubenswrapper[5110]: I0130 00:14:59.284194 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"] Jan 30 00:14:59 crc kubenswrapper[5110]: I0130 00:14:59.291901 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dksh5" Jan 30 00:14:59 crc kubenswrapper[5110]: I0130 00:14:59.577399 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:14:59 crc kubenswrapper[5110]: W0130 00:14:59.630363 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4f501faa_bbd6_44a2_b888_bbd39e59a94b.slice/crio-5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069 WatchSource:0}: Error finding container 5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069: Status 404 returned error can't find the container with id 5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069 Jan 30 00:14:59 crc kubenswrapper[5110]: I0130 00:14:59.773363 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"] Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.133539 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9"] Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.219690 5110 generic.go:358] "Generic (PLEG): container finished" podID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerID="783900952ff3c1f6825b389b55bc37952e3cf7632de9dcfbe66e05214cf9455b" exitCode=0 Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.222249 5110 generic.go:358] "Generic (PLEG): container finished" podID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerID="b78bc9346a3d22df2d3927b657730c01f5a8df684a9c1823ca1d96fc854706a6" exitCode=0 Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.400822 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9"] Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.401468 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerDied","Data":"783900952ff3c1f6825b389b55bc37952e3cf7632de9dcfbe66e05214cf9455b"} Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.401523 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerDied","Data":"b78bc9346a3d22df2d3927b657730c01f5a8df684a9c1823ca1d96fc854706a6"} Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.401548 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4f501faa-bbd6-44a2-b888-bbd39e59a94b","Type":"ContainerStarted","Data":"5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069"} Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.401563 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.405058 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.405437 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.493818 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.493907 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnxp\" (UniqueName: \"kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.493994 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.566168 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.595604 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tqnxp\" (UniqueName: \"kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.596088 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.596492 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.598617 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.614797 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.619398 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqnxp\" (UniqueName: \"kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp\") pod \"collect-profiles-29495535-pg8q9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.629414 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698197 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities\") pod \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698276 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qrhd\" (UniqueName: \"kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd\") pod \"7b966975-ecee-4596-bdc1-c92dbe87e93d\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698319 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content\") pod \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698398 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9765\" (UniqueName: \"kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765\") pod \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\" (UID: \"9cc2128b-e711-46fb-8f8a-71fe2622af5d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698460 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content\") pod \"7b966975-ecee-4596-bdc1-c92dbe87e93d\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.698510 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities\") pod \"7b966975-ecee-4596-bdc1-c92dbe87e93d\" (UID: \"7b966975-ecee-4596-bdc1-c92dbe87e93d\") " Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.699710 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities" (OuterVolumeSpecName: "utilities") pod "7b966975-ecee-4596-bdc1-c92dbe87e93d" (UID: "7b966975-ecee-4596-bdc1-c92dbe87e93d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.700261 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities" (OuterVolumeSpecName: "utilities") pod "9cc2128b-e711-46fb-8f8a-71fe2622af5d" (UID: "9cc2128b-e711-46fb-8f8a-71fe2622af5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.708986 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765" (OuterVolumeSpecName: "kube-api-access-p9765") pod "9cc2128b-e711-46fb-8f8a-71fe2622af5d" (UID: "9cc2128b-e711-46fb-8f8a-71fe2622af5d"). InnerVolumeSpecName "kube-api-access-p9765". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.709617 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd" (OuterVolumeSpecName: "kube-api-access-6qrhd") pod "7b966975-ecee-4596-bdc1-c92dbe87e93d" (UID: "7b966975-ecee-4596-bdc1-c92dbe87e93d"). InnerVolumeSpecName "kube-api-access-6qrhd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.722749 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.734859 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cc2128b-e711-46fb-8f8a-71fe2622af5d" (UID: "9cc2128b-e711-46fb-8f8a-71fe2622af5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.758025 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b966975-ecee-4596-bdc1-c92dbe87e93d" (UID: "7b966975-ecee-4596-bdc1-c92dbe87e93d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801569 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801608 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801619 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6qrhd\" (UniqueName: \"kubernetes.io/projected/7b966975-ecee-4596-bdc1-c92dbe87e93d-kube-api-access-6qrhd\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801632 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc2128b-e711-46fb-8f8a-71fe2622af5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801643 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9765\" (UniqueName: \"kubernetes.io/projected/9cc2128b-e711-46fb-8f8a-71fe2622af5d-kube-api-access-p9765\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:00 crc kubenswrapper[5110]: I0130 00:15:00.801652 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b966975-ecee-4596-bdc1-c92dbe87e93d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.124657 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9"] Jan 30 00:15:01 crc kubenswrapper[5110]: W0130 00:15:01.133928 5110 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda2d51e1_63b1_49fe_998b_c1a5b4dc8fd9.slice/crio-73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50 WatchSource:0}: Error finding container 73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50: Status 404 returned error can't find the container with id 73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50 Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.237855 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldfsg" event={"ID":"7b966975-ecee-4596-bdc1-c92dbe87e93d","Type":"ContainerDied","Data":"4e3fea56767ecb5a510d10de9dff8d67bb7689c9e1baceef8b1249d94784b737"} Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.237935 5110 scope.go:117] "RemoveContainer" containerID="783900952ff3c1f6825b389b55bc37952e3cf7632de9dcfbe66e05214cf9455b" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.238141 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldfsg" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.249908 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r5gk" event={"ID":"9cc2128b-e711-46fb-8f8a-71fe2622af5d","Type":"ContainerDied","Data":"d99820b18f7ed5cfbc8928a61c97abaf7483cfa9a91644a4279e5acabcc99733"} Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.250056 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8r5gk" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.252415 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" event={"ID":"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9","Type":"ContainerStarted","Data":"73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50"} Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.255803 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4f501faa-bbd6-44a2-b888-bbd39e59a94b","Type":"ContainerStarted","Data":"53a50c26a0ba0c3a6b04361581c2b5dcdc3cb81f4e4866dc3b30fe6d16b24fe0"} Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.266803 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldfsg"] Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.270792 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ldfsg"] Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.287585 5110 scope.go:117] "RemoveContainer" containerID="7032170822bf4a1670911d04b43bffe637db3798f228f153b6b4622ea1c7bb15" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.311973 5110 scope.go:117] "RemoveContainer" containerID="f7a779482691fbd290c2f2885bdd9ad8049e37869b33a02b3c115901ea8123c5" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.327435 5110 scope.go:117] "RemoveContainer" containerID="b78bc9346a3d22df2d3927b657730c01f5a8df684a9c1823ca1d96fc854706a6" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.349440 5110 scope.go:117] "RemoveContainer" containerID="867b402fac2351435b07b2ba3b048d731a10cca616a1c7dcdcb965af6ec28d20" Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.382867 5110 scope.go:117] "RemoveContainer" containerID="c1dbe5229f434468ca1d8cd51e2bb610e3374a508574d476e66e56ffb9ef524e" 
Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.475454 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4cxlz" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="registry-server" containerID="cri-o://033c3b7f9bcc112e06995ddb2ca4204a9fe4c7c5bc33be29a27ba60c4b680b8a" gracePeriod=2 Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.476427 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"] Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.476451 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8r5gk"] Jan 30 00:15:01 crc kubenswrapper[5110]: I0130 00:15:01.524513 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=3.524498913 podStartE2EDuration="3.524498913s" podCreationTimestamp="2026-01-30 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:01.522915939 +0000 UTC m=+163.481152068" watchObservedRunningTime="2026-01-30 00:15:01.524498913 +0000 UTC m=+163.482735042" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.264503 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" event={"ID":"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9","Type":"ContainerStarted","Data":"d5870c069a85fe8a829d64824439c0a55ef938f7d7e2ed45e361d7b35d347241"} Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.266761 5110 generic.go:358] "Generic (PLEG): container finished" podID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerID="033c3b7f9bcc112e06995ddb2ca4204a9fe4c7c5bc33be29a27ba60c4b680b8a" exitCode=0 Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.266833 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerDied","Data":"033c3b7f9bcc112e06995ddb2ca4204a9fe4c7c5bc33be29a27ba60c4b680b8a"} Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.268211 5110 generic.go:358] "Generic (PLEG): container finished" podID="4f501faa-bbd6-44a2-b888-bbd39e59a94b" containerID="53a50c26a0ba0c3a6b04361581c2b5dcdc3cb81f4e4866dc3b30fe6d16b24fe0" exitCode=0 Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.268412 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4f501faa-bbd6-44a2-b888-bbd39e59a94b","Type":"ContainerDied","Data":"53a50c26a0ba0c3a6b04361581c2b5dcdc3cb81f4e4866dc3b30fe6d16b24fe0"} Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.771771 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"] Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.772043 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2f6cp" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="registry-server" containerID="cri-o://907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447" gracePeriod=2 Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.815924 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.879853 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" path="/var/lib/kubelet/pods/7b966975-ecee-4596-bdc1-c92dbe87e93d/volumes" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.880525 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" path="/var/lib/kubelet/pods/9cc2128b-e711-46fb-8f8a-71fe2622af5d/volumes" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.938548 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content\") pod \"be22ec96-01a6-4653-bf07-8fe0a61baf24\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.938595 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2p8\" (UniqueName: \"kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8\") pod \"be22ec96-01a6-4653-bf07-8fe0a61baf24\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.938647 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities\") pod \"be22ec96-01a6-4653-bf07-8fe0a61baf24\" (UID: \"be22ec96-01a6-4653-bf07-8fe0a61baf24\") " Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.940696 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities" (OuterVolumeSpecName: "utilities") pod "be22ec96-01a6-4653-bf07-8fe0a61baf24" (UID: "be22ec96-01a6-4653-bf07-8fe0a61baf24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.946852 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8" (OuterVolumeSpecName: "kube-api-access-nq2p8") pod "be22ec96-01a6-4653-bf07-8fe0a61baf24" (UID: "be22ec96-01a6-4653-bf07-8fe0a61baf24"). InnerVolumeSpecName "kube-api-access-nq2p8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:02 crc kubenswrapper[5110]: I0130 00:15:02.952094 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be22ec96-01a6-4653-bf07-8fe0a61baf24" (UID: "be22ec96-01a6-4653-bf07-8fe0a61baf24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.040684 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.040932 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nq2p8\" (UniqueName: \"kubernetes.io/projected/be22ec96-01a6-4653-bf07-8fe0a61baf24-kube-api-access-nq2p8\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.040945 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be22ec96-01a6-4653-bf07-8fe0a61baf24-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.281056 5110 generic.go:358] "Generic (PLEG): container finished" podID="da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" containerID="d5870c069a85fe8a829d64824439c0a55ef938f7d7e2ed45e361d7b35d347241" exitCode=0 Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.281848 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" event={"ID":"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9","Type":"ContainerDied","Data":"d5870c069a85fe8a829d64824439c0a55ef938f7d7e2ed45e361d7b35d347241"} Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.284989 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4cxlz" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.292459 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4cxlz" event={"ID":"be22ec96-01a6-4653-bf07-8fe0a61baf24","Type":"ContainerDied","Data":"c63d7adb8f0f10103335f935cabdb40288bb59e5e02c0ee864666504e6110095"} Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.292610 5110 scope.go:117] "RemoveContainer" containerID="033c3b7f9bcc112e06995ddb2ca4204a9fe4c7c5bc33be29a27ba60c4b680b8a" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.321583 5110 scope.go:117] "RemoveContainer" containerID="dca70626a4a1521d5b0ab73b2bccbfd6ca4c73db068edc29ba789843dfd8a15c" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.350980 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"] Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.355252 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4cxlz"] Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.374588 5110 scope.go:117] "RemoveContainer" containerID="755b3e7f7521811c707410a99d60ebdea6be3ad4440600825acb51c1507f5a16" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.630114 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.764874 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access\") pod \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.765013 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir\") pod \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\" (UID: \"4f501faa-bbd6-44a2-b888-bbd39e59a94b\") " Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.765128 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4f501faa-bbd6-44a2-b888-bbd39e59a94b" (UID: "4f501faa-bbd6-44a2-b888-bbd39e59a94b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.765313 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.773475 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f501faa-bbd6-44a2-b888-bbd39e59a94b" (UID: "4f501faa-bbd6-44a2-b888-bbd39e59a94b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.794958 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.866471 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities\") pod \"02c56d75-9b83-41cf-8126-774743923b26\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.866649 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content\") pod \"02c56d75-9b83-41cf-8126-774743923b26\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.866741 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8tt\" (UniqueName: \"kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt\") pod \"02c56d75-9b83-41cf-8126-774743923b26\" (UID: \"02c56d75-9b83-41cf-8126-774743923b26\") " Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.867731 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f501faa-bbd6-44a2-b888-bbd39e59a94b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.867755 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities" (OuterVolumeSpecName: "utilities") pod "02c56d75-9b83-41cf-8126-774743923b26" (UID: "02c56d75-9b83-41cf-8126-774743923b26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.870154 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt" (OuterVolumeSpecName: "kube-api-access-gd8tt") pod "02c56d75-9b83-41cf-8126-774743923b26" (UID: "02c56d75-9b83-41cf-8126-774743923b26"). InnerVolumeSpecName "kube-api-access-gd8tt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.969681 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gd8tt\" (UniqueName: \"kubernetes.io/projected/02c56d75-9b83-41cf-8126-774743923b26-kube-api-access-gd8tt\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.970491 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5110]: I0130 00:15:03.974032 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c56d75-9b83-41cf-8126-774743923b26" (UID: "02c56d75-9b83-41cf-8126-774743923b26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.072171 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c56d75-9b83-41cf-8126-774743923b26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150155 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150791 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150809 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150820 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150827 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150840 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150847 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150860 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f501faa-bbd6-44a2-b888-bbd39e59a94b" containerName="pruner" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150866 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f501faa-bbd6-44a2-b888-bbd39e59a94b" containerName="pruner" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150874 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150879 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150886 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150890 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150900 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150905 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150912 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" 
containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150921 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150927 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150933 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150943 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150948 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="extract-utilities" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150954 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150959 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150973 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150978 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150984 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.150989 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="extract-content" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.151074 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f501faa-bbd6-44a2-b888-bbd39e59a94b" containerName="pruner" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.151083 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7b966975-ecee-4596-bdc1-c92dbe87e93d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.151093 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9cc2128b-e711-46fb-8f8a-71fe2622af5d" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.151099 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="02c56d75-9b83-41cf-8126-774743923b26" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.151106 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" containerName="registry-server" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.175271 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.175457 5110 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.274542 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.274584 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.274621 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.294842 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.294856 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4f501faa-bbd6-44a2-b888-bbd39e59a94b","Type":"ContainerDied","Data":"5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069"} Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.294924 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aeffde531df12389923edafdd1dc068bbbabe6d7f95ff0ceadc1c013dba2069" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.297495 5110 generic.go:358] "Generic (PLEG): container finished" podID="02c56d75-9b83-41cf-8126-774743923b26" containerID="907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447" exitCode=0 Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.297866 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2f6cp" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.299495 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerDied","Data":"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447"} Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.299560 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f6cp" event={"ID":"02c56d75-9b83-41cf-8126-774743923b26","Type":"ContainerDied","Data":"989c5e9e22814338a65ca7e4404590eb04ac7f9847a7f90b33569f10a0e0cf68"} Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.299590 5110 scope.go:117] "RemoveContainer" containerID="907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.328067 5110 scope.go:117] "RemoveContainer" containerID="e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.335407 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"] Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.337843 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2f6cp"] Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.352175 5110 scope.go:117] "RemoveContainer" containerID="ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.369516 5110 scope.go:117] "RemoveContainer" containerID="907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447" Jan 30 00:15:04 crc kubenswrapper[5110]: E0130 00:15:04.369949 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447\": container with ID starting with 907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447 not found: ID does not exist" containerID="907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.369981 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447"} err="failed to get container status \"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447\": rpc error: code = NotFound desc = could not find container \"907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447\": container with ID starting with 907b2eead8a0684465ccf2ee320688b22e911460c86a42b9a2d278545314b447 not found: ID does not exist" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.370015 5110 scope.go:117] "RemoveContainer" containerID="e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece" Jan 30 00:15:04 crc kubenswrapper[5110]: E0130 00:15:04.370222 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece\": container with ID starting with e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece not found: ID does not exist" containerID="e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.370239 5110 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece"} err="failed to get container status \"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece\": rpc error: code = NotFound desc = could not find container \"e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece\": container with ID starting with e0fa459efd943234cdf2a238efc3d47294fa6564643992644048689ec140fece not found: ID does not exist" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.370251 5110 scope.go:117] "RemoveContainer" containerID="ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6" Jan 30 00:15:04 crc kubenswrapper[5110]: E0130 00:15:04.370428 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6\": container with ID starting with ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6 not found: ID does not exist" containerID="ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.370451 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6"} err="failed to get container status \"ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6\": rpc error: code = NotFound desc = could not find container \"ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6\": container with ID starting with ad13aa7b72f9598f31ff89d7f31e60cf8fb4af7b50f9734948a9c4e3d0d00af6 not found: ID does not exist" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.376597 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.376634 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.376671 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.376779 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.376792 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock\") pod \"installer-12-crc\" (UID: 
\"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.397187 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access\") pod \"installer-12-crc\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.490581 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.575233 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.687478 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqnxp\" (UniqueName: \"kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp\") pod \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.687938 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume\") pod \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.688081 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume\") pod \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\" (UID: \"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9\") " Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.689024 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume" (OuterVolumeSpecName: "config-volume") pod "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" (UID: "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.694540 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" (UID: "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.694562 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp" (OuterVolumeSpecName: "kube-api-access-tqnxp") pod "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" (UID: "da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9"). InnerVolumeSpecName "kube-api-access-tqnxp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.790081 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.790137 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tqnxp\" (UniqueName: \"kubernetes.io/projected/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-kube-api-access-tqnxp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.790148 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.880060 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c56d75-9b83-41cf-8126-774743923b26" path="/var/lib/kubelet/pods/02c56d75-9b83-41cf-8126-774743923b26/volumes" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.881009 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be22ec96-01a6-4653-bf07-8fe0a61baf24" path="/var/lib/kubelet/pods/be22ec96-01a6-4653-bf07-8fe0a61baf24/volumes" Jan 30 00:15:04 crc kubenswrapper[5110]: I0130 00:15:04.954136 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:15:05 crc kubenswrapper[5110]: I0130 00:15:05.305486 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d47f53cb-cf84-4438-baf7-01f0a9095817","Type":"ContainerStarted","Data":"b55682298b0680f35afd6852b03524013a567321e130a0e5fec9c642acd9e75a"} Jan 30 00:15:05 crc kubenswrapper[5110]: I0130 00:15:05.307395 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" event={"ID":"da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9","Type":"ContainerDied","Data":"73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50"} Jan 30 00:15:05 crc kubenswrapper[5110]: I0130 00:15:05.307425 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73406c4dc8519c9883439dfd2a47fc1db2061e75a31525a7375b7864961cff50" Jan 30 00:15:05 crc kubenswrapper[5110]: I0130 00:15:05.307517 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-pg8q9" Jan 30 00:15:06 crc kubenswrapper[5110]: I0130 00:15:06.315183 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d47f53cb-cf84-4438-baf7-01f0a9095817","Type":"ContainerStarted","Data":"148ede658a00a1196262676c2597d1d67fcceba5ecfdd9d143b1373d72a1189a"} Jan 30 00:15:06 crc kubenswrapper[5110]: I0130 00:15:06.334863 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.334846252 podStartE2EDuration="2.334846252s" podCreationTimestamp="2026-01-30 00:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:06.330847789 +0000 UTC m=+168.289083918" watchObservedRunningTime="2026-01-30 00:15:06.334846252 +0000 UTC m=+168.293082381" Jan 30 00:15:09 crc kubenswrapper[5110]: I0130 00:15:09.138975 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.380321 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift" containerID="cri-o://3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf" gracePeriod=15 Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.496004 5110 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-vwh7r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.496447 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.923037 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983099 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8577754547-zjbpw"]
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983803 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" containerName="collect-profiles"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983827 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" containerName="collect-profiles"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983870 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983881 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.983985 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="da2d51e1-63b1-49fe-998b-c1a5b4dc8fd9" containerName="collect-profiles"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.984007 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerName="oauth-openshift"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.991572 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:24 crc kubenswrapper[5110]: I0130 00:15:24.995120 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8577754547-zjbpw"]
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011425 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011575 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011635 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r89z\" (UniqueName: \"kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011674 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011712 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011796 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011834 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011887 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.011947 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012005 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012056 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012083 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012125 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012172 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error\") pod \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\" (UID: \"0c2ae4ef-cf7d-4c77-9892-00d84584bed1\") "
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012354 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-policies\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012390 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012422 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012448 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012478 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-login\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012512 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012581 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012620 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012647 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-session\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012724 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012771 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-error\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012810 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-dir\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012841 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.012868 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhvx\" (UniqueName: \"kubernetes.io/projected/bcd5336d-9ea6-4035-af3f-378f103d8a0f-kube-api-access-vvhvx\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.016068 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.016302 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.019069 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.019195 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.025933 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z" (OuterVolumeSpecName: "kube-api-access-6r89z") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "kube-api-access-6r89z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.026381 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.028795 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.032422 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.033375 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.033702 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.037682 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.038307 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.038682 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.038849 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0c2ae4ef-cf7d-4c77-9892-00d84584bed1" (UID: "0c2ae4ef-cf7d-4c77-9892-00d84584bed1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114292 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-policies\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114446 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114490 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114775 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-login\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.114907 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115092 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115216 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115265 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-session\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115451 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115552 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-error\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115630 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-dir\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115677 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115716 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvhvx\" (UniqueName: \"kubernetes.io/projected/bcd5336d-9ea6-4035-af3f-378f103d8a0f-kube-api-access-vvhvx\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115765 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115832 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115862 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115894 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115922 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115950 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115977 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115997 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116016 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116039 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116059 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6r89z\" (UniqueName: \"kubernetes.io/projected/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-kube-api-access-6r89z\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116078 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116184 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116212 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116238 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2ae4ef-cf7d-4c77-9892-00d84584bed1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.115629 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-policies\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.117124 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd5336d-9ea6-4035-af3f-378f103d8a0f-audit-dir\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.119402 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.116995 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.119889 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.120513 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-error\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.120569 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-login\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.120960 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.121433 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.122793 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-session\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.123663 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.126725 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd5336d-9ea6-4035-af3f-378f103d8a0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.136392 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvhvx\" (UniqueName: \"kubernetes.io/projected/bcd5336d-9ea6-4035-af3f-378f103d8a0f-kube-api-access-vvhvx\") pod \"oauth-openshift-8577754547-zjbpw\" (UID: \"bcd5336d-9ea6-4035-af3f-378f103d8a0f\") " pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.316506 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.474772 5110 generic.go:358] "Generic (PLEG): container finished" podID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" containerID="3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf" exitCode=0
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.474873 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" event={"ID":"0c2ae4ef-cf7d-4c77-9892-00d84584bed1","Type":"ContainerDied","Data":"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"}
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.474965 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.475001 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-vwh7r" event={"ID":"0c2ae4ef-cf7d-4c77-9892-00d84584bed1","Type":"ContainerDied","Data":"109b5f8b4aa138089f4e0270d60c921292b987f50dac55c26df4fdcb711f5443"}
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.475031 5110 scope.go:117] "RemoveContainer" containerID="3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.518062 5110 scope.go:117] "RemoveContainer" containerID="3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"
Jan 30 00:15:25 crc kubenswrapper[5110]: E0130 00:15:25.518847 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf\": container with ID starting with 3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf not found: ID does not exist" containerID="3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.518906 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf"} err="failed to get container status \"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf\": rpc error: code = NotFound desc = could not find container \"3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf\": container with ID starting with 3b992bc75a90869af333ece76a5d78c9921a872cfc44a8031fb5a04f9bd979cf not found: ID does not exist"
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.582530 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"]
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.585573 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-vwh7r"]
Jan 30 00:15:25 crc kubenswrapper[5110]: I0130 00:15:25.912530 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8577754547-zjbpw"]
Jan 30 00:15:25 crc kubenswrapper[5110]: W0130 00:15:25.924353 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcd5336d_9ea6_4035_af3f_378f103d8a0f.slice/crio-e6a9e30762551456edf99dd69687d8b988f45d0be401a3f6903fe52062764c81 WatchSource:0}: Error finding container e6a9e30762551456edf99dd69687d8b988f45d0be401a3f6903fe52062764c81: Status 404 returned error can't find the container with id e6a9e30762551456edf99dd69687d8b988f45d0be401a3f6903fe52062764c81
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.483872 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw" event={"ID":"bcd5336d-9ea6-4035-af3f-378f103d8a0f","Type":"ContainerStarted","Data":"06fc2bdf0004bea6bf39a02f1b29f19dc868dde9bdee8f8c59562cb9fc31a21a"}
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.483918 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw" event={"ID":"bcd5336d-9ea6-4035-af3f-378f103d8a0f","Type":"ContainerStarted","Data":"e6a9e30762551456edf99dd69687d8b988f45d0be401a3f6903fe52062764c81"}
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.484351 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.515812 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw" podStartSLOduration=27.515793589 podStartE2EDuration="27.515793589s" podCreationTimestamp="2026-01-30 00:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:26.513004121 +0000 UTC m=+188.471240260" watchObservedRunningTime="2026-01-30 00:15:26.515793589 +0000 UTC m=+188.474029718"
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.882162 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2ae4ef-cf7d-4c77-9892-00d84584bed1" path="/var/lib/kubelet/pods/0c2ae4ef-cf7d-4c77-9892-00d84584bed1/volumes"
Jan 30 00:15:26 crc kubenswrapper[5110]: I0130 00:15:26.922032 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8577754547-zjbpw"
Jan 30 00:15:37 crc kubenswrapper[5110]: I0130 00:15:37.891831 5110 ???:1] "http: TLS handshake error from 192.168.126.11:53822: no serving certificate available for the kubelet"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.361035 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.374961 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.375231 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.375540 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376515 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a" gracePeriod=15
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376626 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e" gracePeriod=15
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376612 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33" gracePeriod=15
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376514 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53" gracePeriod=15
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376665 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4" gracePeriod=15
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376914 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.376963 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377012 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377030 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377050 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377067 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377096 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377113 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377159 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377173 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377191 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377203 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377216 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377226 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377246 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377258 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377273 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377284 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377499 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377522 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377540 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377562 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377575 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377589 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377605 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377621 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377831 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.377848 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.378068 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.383872 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.425917 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.426458 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.426755 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.426938 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.427124 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.427280 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.427494 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.427755 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.428388 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.428595 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.454107 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.529791 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.529869 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.529920 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.529933 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.529964 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530016 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530072 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530085 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530141 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530179 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530176 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530211 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530243 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530242 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530488 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530561 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.530854 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.531007 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.531228 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.532184 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.623435 5110 generic.go:358] "Generic (PLEG): container finished" podID="d47f53cb-cf84-4438-baf7-01f0a9095817" containerID="148ede658a00a1196262676c2597d1d67fcceba5ecfdd9d143b1373d72a1189a" exitCode=0
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.623540 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d47f53cb-cf84-4438-baf7-01f0a9095817","Type":"ContainerDied","Data":"148ede658a00a1196262676c2597d1d67fcceba5ecfdd9d143b1373d72a1189a"}
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.625794 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.627792 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.629560 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.631121 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4" exitCode=0
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.631172 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e" exitCode=0
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.631192 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33" exitCode=0
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.631211 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53" exitCode=2
Jan 30 00:15:43 crc kubenswrapper[5110]: I0130 00:15:43.631323 5110 scope.go:117] "RemoveContainer" containerID="49ff3363659f11bf0bf61f4cbfa9ace0cb6a006637fa973b44383bd756fc639f"
Jan 30 00:15:44 crc kubenswrapper[5110]: I0130 00:15:44.644037 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.015459 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.016388 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused"
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066364 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access\") pod \"d47f53cb-cf84-4438-baf7-01f0a9095817\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") "
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066474 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock\") pod \"d47f53cb-cf84-4438-baf7-01f0a9095817\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") "
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066507 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir\") pod \"d47f53cb-cf84-4438-baf7-01f0a9095817\" (UID: \"d47f53cb-cf84-4438-baf7-01f0a9095817\") "
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066564 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock" (OuterVolumeSpecName: "var-lock") pod "d47f53cb-cf84-4438-baf7-01f0a9095817" (UID: "d47f53cb-cf84-4438-baf7-01f0a9095817"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066664 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d47f53cb-cf84-4438-baf7-01f0a9095817" (UID: "d47f53cb-cf84-4438-baf7-01f0a9095817"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066976 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.066996 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d47f53cb-cf84-4438-baf7-01f0a9095817-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.076486 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d47f53cb-cf84-4438-baf7-01f0a9095817" (UID: "d47f53cb-cf84-4438-baf7-01f0a9095817"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.169289 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d47f53cb-cf84-4438-baf7-01f0a9095817-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.653854 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.653839 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d47f53cb-cf84-4438-baf7-01f0a9095817","Type":"ContainerDied","Data":"b55682298b0680f35afd6852b03524013a567321e130a0e5fec9c642acd9e75a"} Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.654314 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b55682298b0680f35afd6852b03524013a567321e130a0e5fec9c642acd9e75a" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.667119 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.867667 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.868642 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.869720 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.870646 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.981749 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982013 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982067 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982130 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982194 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982203 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982301 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.982247 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.983291 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.983466 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.983483 5110 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.983493 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.983502 5110 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:45 crc kubenswrapper[5110]: I0130 00:15:45.987199 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.085249 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.674928 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.677250 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a" exitCode=0 Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.677426 5110 scope.go:117] "RemoveContainer" containerID="35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.678183 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.703941 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.704589 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.723327 5110 scope.go:117] "RemoveContainer" containerID="2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.742271 5110 scope.go:117] "RemoveContainer" containerID="bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.758353 5110 scope.go:117] "RemoveContainer" containerID="bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.772560 5110 scope.go:117] "RemoveContainer" containerID="e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.797892 5110 scope.go:117] "RemoveContainer" containerID="f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.863806 5110 scope.go:117] "RemoveContainer" containerID="35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.866573 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4\": container with ID starting with 35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4 not found: ID does not exist" containerID="35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.866632 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4"} err="failed to get container status \"35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4\": rpc error: code = NotFound desc = could not find container \"35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4\": container with ID starting with 35f8d13fb87a272b1eb66fdb228e2bb7bcf556b39e75e82b49668bf0a4579bd4 not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.866660 5110 scope.go:117] "RemoveContainer" containerID="2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.867408 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\": container with ID starting with 2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e not found: ID does not exist" 
containerID="2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.867485 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e"} err="failed to get container status \"2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\": rpc error: code = NotFound desc = could not find container \"2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e\": container with ID starting with 2c9281518f07e610647992536d9021d902cccd1ed4c415b49e1ed55af31f768e not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.867531 5110 scope.go:117] "RemoveContainer" containerID="bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.868390 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\": container with ID starting with bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33 not found: ID does not exist" containerID="bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.868506 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33"} err="failed to get container status \"bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\": rpc error: code = NotFound desc = could not find container \"bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33\": container with ID starting with bd61ae0b517c762c54f451e88746365229e6059529115357d40c85bd00c63b33 not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.868541 5110 scope.go:117] "RemoveContainer" containerID="bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.868966 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\": container with ID starting with bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53 not found: ID does not exist" containerID="bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.869029 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53"} err="failed to get container status \"bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\": rpc error: code = NotFound desc = could not find container \"bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53\": container with ID starting with bc3c2b3f8c32a0c476bf614aa4c31c64449d31edead9fc66c85a5abeaac32d53 not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.869051 5110 scope.go:117] "RemoveContainer" containerID="e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.869390 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\": container with ID starting with e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a not found: ID does not exist" containerID="e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.869425 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a"} err="failed to get container status \"e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\": rpc error: code = NotFound desc = could not find container \"e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a\": container with ID starting with e43438d6f078d5aec5021fe9950b54dd29193e131c7d2b7e0f3b3351ebb7c69a not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.869446 5110 scope.go:117] "RemoveContainer" containerID="f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f" Jan 30 00:15:46 crc kubenswrapper[5110]: E0130 00:15:46.869678 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\": container with ID starting with f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f not found: ID does not exist" containerID="f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.869702 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f"} err="failed to get container status \"f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\": rpc error: code = NotFound desc = could not find container \"f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f\": container with ID starting with f3d54298838ddc83e342d2c533f656a01ce5ab5308babee1f131310f6fda8d8f not found: ID does not exist" Jan 30 00:15:46 crc kubenswrapper[5110]: I0130 00:15:46.885689 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 30 00:15:48 crc kubenswrapper[5110]: E0130 00:15:48.459869 5110 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:48 crc kubenswrapper[5110]: I0130 00:15:48.460652 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:48 crc kubenswrapper[5110]: E0130 00:15:48.487759 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f5a033a2be06e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:15:48.48725003 +0000 UTC m=+210.445486149,LastTimestamp:2026-01-30 00:15:48.48725003 +0000 UTC m=+210.445486149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:15:48 crc kubenswrapper[5110]: I0130 00:15:48.698473 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"ccc378184af4344ac842b7b6f3455b07ebfbf2affebe06930a814e13b844f99b"} Jan 30 00:15:48 crc kubenswrapper[5110]: I0130 00:15:48.875907 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:49 crc kubenswrapper[5110]: I0130 00:15:49.707237 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"636bd6c44507a61ec3f4791ea135ea923df114074223893dc0886c9d329b710c"} Jan 30 00:15:49 crc kubenswrapper[5110]: I0130 00:15:49.707683 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:49 crc kubenswrapper[5110]: E0130 00:15:49.708221 5110 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:49 crc kubenswrapper[5110]: I0130 00:15:49.708399 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:50 crc kubenswrapper[5110]: I0130 00:15:50.715846 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:50 crc kubenswrapper[5110]: E0130 00:15:50.717534 5110 kubelet.go:3342] "Failed creating 
a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.919180 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.919641 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.920050 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.920489 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.921179 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:52 crc kubenswrapper[5110]: I0130 00:15:52.921255 5110 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 00:15:52 crc kubenswrapper[5110]: E0130 00:15:52.921898 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Jan 30 00:15:53 crc kubenswrapper[5110]: E0130 00:15:53.123030 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Jan 30 00:15:53 crc kubenswrapper[5110]: E0130 00:15:53.524284 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Jan 30 00:15:54 crc kubenswrapper[5110]: E0130 00:15:54.325169 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Jan 30 00:15:55 crc kubenswrapper[5110]: E0130 00:15:55.926281 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" 
interval="3.2s" Jan 30 00:15:56 crc kubenswrapper[5110]: I0130 00:15:56.871613 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:56 crc kubenswrapper[5110]: I0130 00:15:56.879629 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:56 crc kubenswrapper[5110]: I0130 00:15:56.898373 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:15:56 crc kubenswrapper[5110]: I0130 00:15:56.898435 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:15:56 crc kubenswrapper[5110]: E0130 00:15:56.899185 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:56 crc kubenswrapper[5110]: I0130 00:15:56.899569 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:56 crc kubenswrapper[5110]: W0130 00:15:56.934058 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-9ef0ebf45cf6d38eeedf289ae9c8cbcb61e4857a07e5f13dd07d5425d4c2b60a WatchSource:0}: Error finding container 9ef0ebf45cf6d38eeedf289ae9c8cbcb61e4857a07e5f13dd07d5425d4c2b60a: Status 404 returned error can't find the container with id 9ef0ebf45cf6d38eeedf289ae9c8cbcb61e4857a07e5f13dd07d5425d4c2b60a Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.776193 5110 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="192198202dda4cbfb687ab03435f9c9e4aeb78e485073017a364a9442553ef41" exitCode=0 Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.776369 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"192198202dda4cbfb687ab03435f9c9e4aeb78e485073017a364a9442553ef41"} Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.776695 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"9ef0ebf45cf6d38eeedf289ae9c8cbcb61e4857a07e5f13dd07d5425d4c2b60a"} Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.777239 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.777269 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:15:57 crc kubenswrapper[5110]: E0130 00:15:57.778111 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:57 crc kubenswrapper[5110]: I0130 00:15:57.778182 5110 status_manager.go:895] "Failed to get status for pod" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Jan 30 00:15:58 crc kubenswrapper[5110]: E0130 00:15:58.035821 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f5a033a2be06e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:15:48.48725003 +0000 UTC m=+210.445486149,LastTimestamp:2026-01-30 00:15:48.48725003 +0000 UTC m=+210.445486149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.786215 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5de6bc47c505357d67caa022609c70e41478b68ad0be9e318a75f51e2a48c052"} Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.786714 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8b6e913ba088566c45ced03a9ff27dc7f91f5805f96f875e3fb5e15158566316"} Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.788924 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.788994 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695" exitCode=1 Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.789049 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695"} Jan 30 00:15:58 crc kubenswrapper[5110]: I0130 00:15:58.789837 5110 scope.go:117] "RemoveContainer" containerID="c682580eee20b360fa6e8fd2dd27c6f8bd427e8aa702b56b7ba9c6bce01c5695" Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.798313 5110 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.799008 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f4985de02d699a2e2428c9b31efae80f1d5242f266cb89f3a4a89b67e48816de"} Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.803738 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"81190c447ec27f65eb0d76f2d451c5213ffc15bb96ea7f38259ffc60762988dc"} Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.803782 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"88fe5ac5f6ef562ae1e8d23a6ccd9f07a902f21b86d09b3ebf22a2acd92a058a"} Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.803801 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d8ce246accf89d296e6fd4deb1afdab59dd9f9b57a080be2e9aa0b8ae7afd5e5"} Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.803942 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.804053 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:15:59 crc kubenswrapper[5110]: I0130 00:15:59.804087 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:16:01 crc kubenswrapper[5110]: I0130 00:16:01.651539 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:16:01 crc kubenswrapper[5110]: I0130 00:16:01.900595 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:01 crc kubenswrapper[5110]: I0130 00:16:01.900656 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:01 crc kubenswrapper[5110]: I0130 00:16:01.906828 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.815842 5110 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.816235 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.839433 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.839654 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.843176 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:04 crc kubenswrapper[5110]: I0130 00:16:04.846165 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9fe3e1d1-8ee6-4b15-869a-f98feb15fd4b" Jan 30 00:16:05 crc kubenswrapper[5110]: I0130 00:16:05.845823 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:16:05 crc kubenswrapper[5110]: I0130 00:16:05.846218 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95c87c5f-e016-42c1-8e6a-36e478fe2592" Jan 30 00:16:08 crc kubenswrapper[5110]: I0130 00:16:08.143777 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:16:08 crc kubenswrapper[5110]: I0130 00:16:08.151711 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:16:08 crc kubenswrapper[5110]: I0130 00:16:08.891380 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9fe3e1d1-8ee6-4b15-869a-f98feb15fd4b" Jan 30 00:16:09 crc kubenswrapper[5110]: I0130 00:16:09.210529 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:16:09 crc kubenswrapper[5110]: I0130 00:16:09.211504 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:16:14 crc kubenswrapper[5110]: I0130 00:16:14.566732 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:16:14 crc kubenswrapper[5110]: I0130 00:16:14.715717 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:14 crc kubenswrapper[5110]: I0130 00:16:14.829555 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.243685 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.575510 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.622967 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.655776 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.820608 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:16:15 crc kubenswrapper[5110]: I0130 00:16:15.982864 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.156307 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.168657 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.511790 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.563006 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.584508 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.675628 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.859105 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:16:16 crc kubenswrapper[5110]: I0130 00:16:16.876302 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.196173 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.262202 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.438224 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.690756 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.743424 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.905512 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.934319 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.972417 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:16:17 crc kubenswrapper[5110]: I0130 00:16:17.996607 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.087568 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.090687 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.096153 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.116273 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.293217 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.375958 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.406598 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.445083 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.531900 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.542638 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.629030 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.783885 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.806137 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.856411 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 
30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.861768 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.888840 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.894085 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:16:18 crc kubenswrapper[5110]: I0130 00:16:18.964979 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.013991 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.024813 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.129314 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.184278 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.184902 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.187406 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.264468 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.269777 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.356652 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.358751 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.365983 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.412320 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.545171 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.613711 5110 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.746569 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.759740 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.789438 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.918645 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:16:19 crc kubenswrapper[5110]: I0130 00:16:19.962740 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.001182 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.019436 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.082095 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.152526 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.161624 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.241045 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.250175 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.383926 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.531644 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.560213 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.634407 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.644369 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.683641 
5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.687141 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.741618 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.748944 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.814960 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.878773 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.931485 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.950033 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:16:20 crc kubenswrapper[5110]: I0130 00:16:20.961710 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.000428 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.007423 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.032717 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.066134 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.129882 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.186428 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.195639 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.227202 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.316031 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.322037 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.518515 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.540366 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.540597 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.635142 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.729582 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.773598 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.777312 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.781579 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.782045 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:16:21 crc kubenswrapper[5110]: I0130 00:16:21.887788 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.139162 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.215906 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.220472 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.238503 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.248162 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.255478 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 
00:16:22.359273 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.364599 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.382145 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.401892 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.415041 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.448096 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.481199 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.555826 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.681397 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.696690 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.706895 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.706995 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.828496 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.880894 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.961392 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:16:22 crc kubenswrapper[5110]: I0130 00:16:22.980383 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.034735 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.280645 5110 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.304907 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.344418 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.401403 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.451701 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.593008 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.688173 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.797217 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:16:23 crc kubenswrapper[5110]: I0130 00:16:23.863492 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.082076 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.157383 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.208144 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.231778 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.248458 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.286006 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.295213 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.483538 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.500492 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:16:24 crc 
kubenswrapper[5110]: I0130 00:16:24.562955 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.624330 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.645028 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.649255 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.679457 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.734117 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.775305 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.776203 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.799804 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.821760 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:16:24 crc kubenswrapper[5110]: I0130 00:16:24.846027 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.000913 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.141612 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.164992 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.361497 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.373116 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.417528 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.435179 5110 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.484985 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.516412 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.539674 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.654584 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.663758 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.704644 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.746029 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.751788 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.782006 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.908080 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:16:25 crc kubenswrapper[5110]: I0130 00:16:25.950727 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.020006 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.026511 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.085709 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.105935 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.127526 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.176407 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 
00:16:26.192486 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.246018 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.272865 5110 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.286703 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.308558 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.386432 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.467655 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.497456 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.511486 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.522432 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.567282 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.589097 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.606169 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.611518 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.624037 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.675957 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.723055 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.766376 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.792524 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.827529 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.895714 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.908541 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:16:26 crc kubenswrapper[5110]: I0130 00:16:26.986291 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.198049 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.200218 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.331224 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.412852 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.423850 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.512300 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.536592 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.537456 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.652026 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.666210 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.684970 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.721605 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 
00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.735602 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.796469 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.805650 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.805751 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.827614 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.870447 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.870423534 podStartE2EDuration="23.870423534s" podCreationTimestamp="2026-01-30 00:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:27.861428415 +0000 UTC m=+249.819664594" watchObservedRunningTime="2026-01-30 00:16:27.870423534 +0000 UTC m=+249.828659673" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.940729 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.944720 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:16:27 crc kubenswrapper[5110]: I0130 00:16:27.986152 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.114096 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.125449 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.195379 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.310778 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.367194 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.413313 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.436865 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.440245 5110 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.458899 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.557153 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.689121 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.848799 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.850212 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.919750 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:16:28 crc kubenswrapper[5110]: I0130 00:16:28.997566 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.035248 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.203903 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.241045 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.395276 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.422169 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.427698 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.447952 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:16:29 crc kubenswrapper[5110]: I0130 00:16:29.660929 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.019263 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.219214 5110 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.244629 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.425032 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.568314 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.647948 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.762163 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.810160 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.910634 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:16:30 crc kubenswrapper[5110]: I0130 00:16:30.954676 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:16:31 crc kubenswrapper[5110]: I0130 00:16:31.334043 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:16:31 crc kubenswrapper[5110]: I0130 00:16:31.393179 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:16:31 crc kubenswrapper[5110]: I0130 00:16:31.858854 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:16:32 crc kubenswrapper[5110]: I0130 00:16:32.538483 5110 ???:1] "http: TLS handshake error from 192.168.126.11:55442: no serving certificate available for the kubelet" Jan 30 00:16:33 crc kubenswrapper[5110]: I0130 00:16:33.362676 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:16:38 crc kubenswrapper[5110]: I0130 00:16:38.614489 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:16:38 crc kubenswrapper[5110]: I0130 00:16:38.615543 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://636bd6c44507a61ec3f4791ea135ea923df114074223893dc0886c9d329b710c" gracePeriod=5 Jan 30 00:16:38 crc kubenswrapper[5110]: I0130 00:16:38.879108 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:16:39 crc kubenswrapper[5110]: I0130 00:16:39.221321 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:16:39 crc kubenswrapper[5110]: I0130 00:16:39.221467 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.167715 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.168222 5110 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="636bd6c44507a61ec3f4791ea135ea923df114074223893dc0886c9d329b710c" exitCode=137 Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.230254 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.230411 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.232756 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.352861 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.352956 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353035 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353084 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353143 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353165 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353187 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353437 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353578 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353842 5110 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353873 5110 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353892 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.353909 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.364434 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.455437 5110 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:44 crc kubenswrapper[5110]: I0130 00:16:44.882697 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 30 00:16:45 crc kubenswrapper[5110]: I0130 00:16:45.179383 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:16:45 crc kubenswrapper[5110]: I0130 00:16:45.179690 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:16:45 crc kubenswrapper[5110]: I0130 00:16:45.179704 5110 scope.go:117] "RemoveContainer" containerID="636bd6c44507a61ec3f4791ea135ea923df114074223893dc0886c9d329b710c" Jan 30 00:16:45 crc kubenswrapper[5110]: I0130 00:16:45.184277 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:16:45 crc kubenswrapper[5110]: I0130 00:16:45.187442 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.188420 5110 generic.go:358] "Generic (PLEG): container finished" podID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerID="5c694b2e34bee137a9a92b625c41e7da7d318ea0fc9589befde5488183ef71b3" exitCode=0 Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.188917 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-kxkkt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.188505 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerDied","Data":"5c694b2e34bee137a9a92b625c41e7da7d318ea0fc9589befde5488183ef71b3"} Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.189017 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.189831 5110 scope.go:117] "RemoveContainer" containerID="5c694b2e34bee137a9a92b625c41e7da7d318ea0fc9589befde5488183ef71b3" Jan 30 00:16:46 crc kubenswrapper[5110]: I0130 00:16:46.191727 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:16:47 crc kubenswrapper[5110]: I0130 00:16:47.199617 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerStarted","Data":"fde233465eb471f43615b4c8cd1939eb99565bf05bda67e97fd0da4c8fefce5e"} Jan 30 00:16:47 
crc kubenswrapper[5110]: I0130 00:16:47.202026 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:16:47 crc kubenswrapper[5110]: I0130 00:16:47.205592 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" Jan 30 00:16:59 crc kubenswrapper[5110]: I0130 00:16:59.849446 5110 ???:1] "http: TLS handshake error from 192.168.126.11:52826: no serving certificate available for the kubelet" Jan 30 00:17:07 crc kubenswrapper[5110]: I0130 00:17:07.837806 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"] Jan 30 00:17:07 crc kubenswrapper[5110]: I0130 00:17:07.838895 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerName="controller-manager" containerID="cri-o://23b15306fc1c94d0cf2724bcf67908d3b4b7dda26cdee27e55c2badb19459ead" gracePeriod=30 Jan 30 00:17:07 crc kubenswrapper[5110]: I0130 00:17:07.850028 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"] Jan 30 00:17:07 crc kubenswrapper[5110]: I0130 00:17:07.850448 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" podUID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" containerName="route-controller-manager" containerID="cri-o://bd28bc52969fbd17e20f4d3aa641ca142f8e769d7e5885ba5e08ac929f88bd83" gracePeriod=30 Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.092000 5110 generic.go:358] "Generic (PLEG): container finished" podID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" containerID="bd28bc52969fbd17e20f4d3aa641ca142f8e769d7e5885ba5e08ac929f88bd83" exitCode=0 Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.092234 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" event={"ID":"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67","Type":"ContainerDied","Data":"bd28bc52969fbd17e20f4d3aa641ca142f8e769d7e5885ba5e08ac929f88bd83"} Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.094349 5110 generic.go:358] "Generic (PLEG): container finished" podID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerID="23b15306fc1c94d0cf2724bcf67908d3b4b7dda26cdee27e55c2badb19459ead" exitCode=0 Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.094440 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" event={"ID":"65919fc8-e5a3-4a1b-9a55-59430b3a8394","Type":"ContainerDied","Data":"23b15306fc1c94d0cf2724bcf67908d3b4b7dda26cdee27e55c2badb19459ead"} Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.259642 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.259820 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.308000 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309729 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309758 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309800 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerName="controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309814 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerName="controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309843 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" containerName="route-controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309854 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" containerName="route-controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309895 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" containerName="installer" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.309903 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" containerName="installer" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.310388 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d47f53cb-cf84-4438-baf7-01f0a9095817" containerName="installer" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.310425 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" containerName="controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.310446 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.310459 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" containerName="route-controller-manager" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.336555 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.336762 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.349096 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.353875 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.355323 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394385 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394479 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394505 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca\") pod \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394523 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394542 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config\") pod \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394583 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394625 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394677 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq7vc\" (UniqueName: \"kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc\") pod \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394736 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8tv\" (UniqueName: \"kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv\") pod \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\" (UID: \"65919fc8-e5a3-4a1b-9a55-59430b3a8394\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394773 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert\") pod \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394819 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp\") pod \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\" (UID: \"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67\") " Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.394855 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp" (OuterVolumeSpecName: "tmp") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.395036 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/65919fc8-e5a3-4a1b-9a55-59430b3a8394-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.395579 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca" (OuterVolumeSpecName: "client-ca") pod "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" (UID: "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.395974 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config" (OuterVolumeSpecName: "config") pod "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" (UID: "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.396054 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca" (OuterVolumeSpecName: "client-ca") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.396533 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.396591 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp" (OuterVolumeSpecName: "tmp") pod "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" (UID: "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.396296 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config" (OuterVolumeSpecName: "config") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.402100 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv" (OuterVolumeSpecName: "kube-api-access-nr8tv") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "kube-api-access-nr8tv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.402246 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" (UID: "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.402634 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65919fc8-e5a3-4a1b-9a55-59430b3a8394" (UID: "65919fc8-e5a3-4a1b-9a55-59430b3a8394"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.407399 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc" (OuterVolumeSpecName: "kube-api-access-dq7vc") pod "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" (UID: "7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67"). InnerVolumeSpecName "kube-api-access-dq7vc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496025 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496092 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496117 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496142 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4448f\" (UniqueName: \"kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496367 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55gld\" (UniqueName: \"kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496440 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496518 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " 
pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496609 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496637 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496658 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496839 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496864 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496905 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496920 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496934 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65919fc8-e5a3-4a1b-9a55-59430b3a8394-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496945 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65919fc8-e5a3-4a1b-9a55-59430b3a8394-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.496985 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dq7vc\" (UniqueName: \"kubernetes.io/projected/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-kube-api-access-dq7vc\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.497003 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nr8tv\" (UniqueName: \"kubernetes.io/projected/65919fc8-e5a3-4a1b-9a55-59430b3a8394-kube-api-access-nr8tv\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: 
I0130 00:17:08.497014 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.497026 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.598895 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55gld\" (UniqueName: \"kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.598978 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599004 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599203 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599348 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599383 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599407 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599503 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599566 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599674 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599690 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4448f\" (UniqueName: \"kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.599612 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.600436 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.600499 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.601118 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: 
\"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.601459 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.601483 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.604663 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.611941 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.616783 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4448f\" (UniqueName: \"kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f\") pod \"controller-manager-6d466d9dcb-q27rp\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.616929 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55gld\" (UniqueName: \"kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld\") pod \"route-controller-manager-54bd6fd6d8-9gxr7\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.661063 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.687114 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.886671 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:08 crc kubenswrapper[5110]: I0130 00:17:08.922749 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:08 crc kubenswrapper[5110]: W0130 00:17:08.925812 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05bab2f4_4cac_42bd_93d2_127358985699.slice/crio-5e430e0d246136296789464930d5e231529f978b27a9e4cabbd159979abf7b51 WatchSource:0}: Error finding container 5e430e0d246136296789464930d5e231529f978b27a9e4cabbd159979abf7b51: Status 404 returned error can't find the container with id 5e430e0d246136296789464930d5e231529f978b27a9e4cabbd159979abf7b51 Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.108524 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" event={"ID":"04a15431-129a-4865-9af8-1bab9441e2eb","Type":"ContainerStarted","Data":"24a300b79dd7739e6c871fef915a6c3ce67faeecf426613c59882d751c0374ad"} Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.108949 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.112023 5110 patch_prober.go:28] interesting pod/route-controller-manager-54bd6fd6d8-9gxr7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.112093 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.114456 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" event={"ID":"65919fc8-e5a3-4a1b-9a55-59430b3a8394","Type":"ContainerDied","Data":"acd24b9e39bc6639e86f4bf0e2c49ced473029f709fc8f956d5906d7c3207b0b"} Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.114561 5110 scope.go:117] "RemoveContainer" containerID="23b15306fc1c94d0cf2724bcf67908d3b4b7dda26cdee27e55c2badb19459ead" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.114912 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5p8zc" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.120929 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" event={"ID":"05bab2f4-4cac-42bd-93d2-127358985699","Type":"ContainerStarted","Data":"70bb5b6a8867dd4c1bb51ad01a99c2fc200119747e040c018bf08b62ae6094e1"} Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.120980 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" event={"ID":"05bab2f4-4cac-42bd-93d2-127358985699","Type":"ContainerStarted","Data":"5e430e0d246136296789464930d5e231529f978b27a9e4cabbd159979abf7b51"} Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.121523 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.122797 5110 patch_prober.go:28] interesting pod/controller-manager-6d466d9dcb-q27rp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.122870 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" podUID="05bab2f4-4cac-42bd-93d2-127358985699" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.131919 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" event={"ID":"7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67","Type":"ContainerDied","Data":"7e753ca107495b8df50e94edf887d343aefe842b81d6ad680f3e9e3ee82d1b91"} Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.132619 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.140037 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" podStartSLOduration=2.1400100220000002 podStartE2EDuration="2.140010022s" podCreationTimestamp="2026-01-30 00:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:09.128855199 +0000 UTC m=+291.087091368" watchObservedRunningTime="2026-01-30 00:17:09.140010022 +0000 UTC m=+291.098246151" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.159103 5110 scope.go:117] "RemoveContainer" containerID="bd28bc52969fbd17e20f4d3aa641ca142f8e769d7e5885ba5e08ac929f88bd83" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.160304 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"] Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.172422 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5p8zc"] Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.179435 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" podStartSLOduration=2.179402241 podStartE2EDuration="2.179402241s" podCreationTimestamp="2026-01-30 00:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:09.168743941 +0000 UTC m=+291.126980080" watchObservedRunningTime="2026-01-30 00:17:09.179402241 +0000 UTC m=+291.137638380" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.187523 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"] Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.192868 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jwv7h"] Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.210266 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.210770 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.210844 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.211734 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d"} 
pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:17:09 crc kubenswrapper[5110]: I0130 00:17:09.211848 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" containerID="cri-o://ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d" gracePeriod=600 Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.140267 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" event={"ID":"04a15431-129a-4865-9af8-1bab9441e2eb","Type":"ContainerStarted","Data":"16d9ce8f99bae1df2eed6ebc4dafc9a72f7be16b1489b914c4050f7860c52997"} Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.144282 5110 generic.go:358] "Generic (PLEG): container finished" podID="97dc714a-5d84-4c81-99ef-13067437fcad" containerID="ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d" exitCode=0 Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.144403 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerDied","Data":"ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d"} Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.144540 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3"} Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.148377 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.152525 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.884461 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65919fc8-e5a3-4a1b-9a55-59430b3a8394" path="/var/lib/kubelet/pods/65919fc8-e5a3-4a1b-9a55-59430b3a8394/volumes" Jan 30 00:17:10 crc kubenswrapper[5110]: I0130 00:17:10.885644 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67" path="/var/lib/kubelet/pods/7da52df2-bfb5-44ad-bd4a-1bd7d1e8df67/volumes" Jan 30 00:17:15 crc kubenswrapper[5110]: I0130 00:17:15.183171 5110 generic.go:358] "Generic (PLEG): container finished" podID="a070c3b8-7e87-4386-98d0-7ed3aaa53772" containerID="92ed3f2d1d5c3b6c9b77ffb3cacfa3c73dba13815c977931be648840e1aaf89e" exitCode=0 Jan 30 00:17:15 crc kubenswrapper[5110]: I0130 00:17:15.183299 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-r6lp4" event={"ID":"a070c3b8-7e87-4386-98d0-7ed3aaa53772","Type":"ContainerDied","Data":"92ed3f2d1d5c3b6c9b77ffb3cacfa3c73dba13815c977931be648840e1aaf89e"} Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.554534 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.631860 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca\") pod \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.631984 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdcm2\" (UniqueName: \"kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2\") pod \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\" (UID: \"a070c3b8-7e87-4386-98d0-7ed3aaa53772\") " Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.633099 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca" (OuterVolumeSpecName: "serviceca") pod "a070c3b8-7e87-4386-98d0-7ed3aaa53772" (UID: "a070c3b8-7e87-4386-98d0-7ed3aaa53772"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.648007 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2" (OuterVolumeSpecName: "kube-api-access-sdcm2") pod "a070c3b8-7e87-4386-98d0-7ed3aaa53772" (UID: "a070c3b8-7e87-4386-98d0-7ed3aaa53772"). InnerVolumeSpecName "kube-api-access-sdcm2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.734961 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sdcm2\" (UniqueName: \"kubernetes.io/projected/a070c3b8-7e87-4386-98d0-7ed3aaa53772-kube-api-access-sdcm2\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:16 crc kubenswrapper[5110]: I0130 00:17:16.735013 5110 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a070c3b8-7e87-4386-98d0-7ed3aaa53772-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:17 crc kubenswrapper[5110]: I0130 00:17:17.207769 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-r6lp4" event={"ID":"a070c3b8-7e87-4386-98d0-7ed3aaa53772","Type":"ContainerDied","Data":"a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24"} Jan 30 00:17:17 crc kubenswrapper[5110]: I0130 00:17:17.208254 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ec0838642dffe67cf899fb83d084e73c0b4350914f4c65227be3ffb35e8d24" Jan 30 00:17:17 crc kubenswrapper[5110]: I0130 00:17:17.207970 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-r6lp4" Jan 30 00:17:19 crc kubenswrapper[5110]: I0130 00:17:19.100513 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:17:19 crc kubenswrapper[5110]: I0130 00:17:19.105508 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.024611 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.025953 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" podUID="05bab2f4-4cac-42bd-93d2-127358985699" containerName="controller-manager" containerID="cri-o://70bb5b6a8867dd4c1bb51ad01a99c2fc200119747e040c018bf08b62ae6094e1" gracePeriod=30 Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.040723 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.041089 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" containerName="route-controller-manager" containerID="cri-o://16d9ce8f99bae1df2eed6ebc4dafc9a72f7be16b1489b914c4050f7860c52997" gracePeriod=30 Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.318699 5110 generic.go:358] "Generic (PLEG): container finished" podID="04a15431-129a-4865-9af8-1bab9441e2eb" containerID="16d9ce8f99bae1df2eed6ebc4dafc9a72f7be16b1489b914c4050f7860c52997" exitCode=0 Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.318998 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" event={"ID":"04a15431-129a-4865-9af8-1bab9441e2eb","Type":"ContainerDied","Data":"16d9ce8f99bae1df2eed6ebc4dafc9a72f7be16b1489b914c4050f7860c52997"} Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.321249 5110 generic.go:358] "Generic (PLEG): container finished" podID="05bab2f4-4cac-42bd-93d2-127358985699" containerID="70bb5b6a8867dd4c1bb51ad01a99c2fc200119747e040c018bf08b62ae6094e1" exitCode=0 Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.321801 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" event={"ID":"05bab2f4-4cac-42bd-93d2-127358985699","Type":"ContainerDied","Data":"70bb5b6a8867dd4c1bb51ad01a99c2fc200119747e040c018bf08b62ae6094e1"} Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.560702 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.603874 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604410 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a070c3b8-7e87-4386-98d0-7ed3aaa53772" containerName="image-pruner" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604429 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a070c3b8-7e87-4386-98d0-7ed3aaa53772" containerName="image-pruner" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604462 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" containerName="route-controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604469 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" containerName="route-controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604550 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a070c3b8-7e87-4386-98d0-7ed3aaa53772" containerName="image-pruner" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.604561 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" containerName="route-controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.611053 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.618115 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp\") pod \"04a15431-129a-4865-9af8-1bab9441e2eb\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.618169 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55gld\" (UniqueName: \"kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld\") pod \"04a15431-129a-4865-9af8-1bab9441e2eb\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.618254 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config\") pod \"04a15431-129a-4865-9af8-1bab9441e2eb\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.618301 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert\") pod \"04a15431-129a-4865-9af8-1bab9441e2eb\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.618408 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca\") pod \"04a15431-129a-4865-9af8-1bab9441e2eb\" (UID: \"04a15431-129a-4865-9af8-1bab9441e2eb\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 
00:17:31.618632 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp" (OuterVolumeSpecName: "tmp") pod "04a15431-129a-4865-9af8-1bab9441e2eb" (UID: "04a15431-129a-4865-9af8-1bab9441e2eb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.619192 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config" (OuterVolumeSpecName: "config") pod "04a15431-129a-4865-9af8-1bab9441e2eb" (UID: "04a15431-129a-4865-9af8-1bab9441e2eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.619374 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "04a15431-129a-4865-9af8-1bab9441e2eb" (UID: "04a15431-129a-4865-9af8-1bab9441e2eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.633608 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04a15431-129a-4865-9af8-1bab9441e2eb" (UID: "04a15431-129a-4865-9af8-1bab9441e2eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.642115 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.657712 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld" (OuterVolumeSpecName: "kube-api-access-55gld") pod "04a15431-129a-4865-9af8-1bab9441e2eb" (UID: "04a15431-129a-4865-9af8-1bab9441e2eb"). InnerVolumeSpecName "kube-api-access-55gld". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.720359 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.720436 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f5t4\" (UniqueName: \"kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.720575 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.720781 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.720843 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.721076 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04a15431-129a-4865-9af8-1bab9441e2eb-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.721115 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55gld\" (UniqueName: \"kubernetes.io/projected/04a15431-129a-4865-9af8-1bab9441e2eb-kube-api-access-55gld\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.721134 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.721148 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a15431-129a-4865-9af8-1bab9441e2eb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.721164 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/04a15431-129a-4865-9af8-1bab9441e2eb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.736462 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.772273 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.773362 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05bab2f4-4cac-42bd-93d2-127358985699" containerName="controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.773393 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bab2f4-4cac-42bd-93d2-127358985699" containerName="controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.773526 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="05bab2f4-4cac-42bd-93d2-127358985699" containerName="controller-manager" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.781188 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"] Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.781392 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821669 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp\") pod \"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert\") pod \"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821784 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles\") pod \"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821834 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config\") pod \"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821914 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4448f\" (UniqueName: \"kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f\") pod \"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.821947 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca\") pod 
\"05bab2f4-4cac-42bd-93d2-127358985699\" (UID: \"05bab2f4-4cac-42bd-93d2-127358985699\") " Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822099 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822135 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822182 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822228 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6f5t4\" (UniqueName: \"kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822265 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822297 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822352 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp" (OuterVolumeSpecName: "tmp") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: "05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822358 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822431 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sc6\" (UniqueName: \"kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822459 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822488 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822523 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.822563 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05bab2f4-4cac-42bd-93d2-127358985699-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.823580 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.823653 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.824165 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca" (OuterVolumeSpecName: "client-ca") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: 
"05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.824209 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config" (OuterVolumeSpecName: "config") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: "05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.824221 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: "05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.824429 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.827714 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: "05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.829735 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f" (OuterVolumeSpecName: "kube-api-access-4448f") pod "05bab2f4-4cac-42bd-93d2-127358985699" (UID: "05bab2f4-4cac-42bd-93d2-127358985699"). InnerVolumeSpecName "kube-api-access-4448f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.831583 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.839098 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f5t4\" (UniqueName: \"kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4\") pod \"route-controller-manager-6d6ddfb8-l9t52\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.923928 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924022 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924079 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924151 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924195 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7sc6\" (UniqueName: \"kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924404 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/05bab2f4-4cac-42bd-93d2-127358985699-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924427 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924448 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924467 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4448f\" (UniqueName: \"kubernetes.io/projected/05bab2f4-4cac-42bd-93d2-127358985699-kube-api-access-4448f\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.924488 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05bab2f4-4cac-42bd-93d2-127358985699-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.925390 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.925905 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.927307 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.928307 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.929201 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.942947 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7sc6\" (UniqueName: \"kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6\") pod \"controller-manager-585d4d46c8-xdf8r\" (UID: 
\"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:31 crc kubenswrapper[5110]: I0130 00:17:31.949728 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.096748 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.304609 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.324205 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.336165 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" event={"ID":"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09","Type":"ContainerStarted","Data":"2254d3e99800435e452419ed078c7a41aabf27ad3438409301ebd8749f2210f3"} Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.340099 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.340086 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7" event={"ID":"04a15431-129a-4865-9af8-1bab9441e2eb","Type":"ContainerDied","Data":"24a300b79dd7739e6c871fef915a6c3ce67faeecf426613c59882d751c0374ad"} Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.340198 5110 scope.go:117] "RemoveContainer" containerID="16d9ce8f99bae1df2eed6ebc4dafc9a72f7be16b1489b914c4050f7860c52997" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.342726 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" event={"ID":"05bab2f4-4cac-42bd-93d2-127358985699","Type":"ContainerDied","Data":"5e430e0d246136296789464930d5e231529f978b27a9e4cabbd159979abf7b51"} Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.342882 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d466d9dcb-q27rp" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.381989 5110 scope.go:117] "RemoveContainer" containerID="70bb5b6a8867dd4c1bb51ad01a99c2fc200119747e040c018bf08b62ae6094e1" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.393705 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.408743 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d466d9dcb-q27rp"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.414222 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.420714 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd6fd6d8-9gxr7"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.617112 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"] Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.883067 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a15431-129a-4865-9af8-1bab9441e2eb" path="/var/lib/kubelet/pods/04a15431-129a-4865-9af8-1bab9441e2eb/volumes" Jan 30 00:17:32 crc kubenswrapper[5110]: I0130 00:17:32.883705 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bab2f4-4cac-42bd-93d2-127358985699" path="/var/lib/kubelet/pods/05bab2f4-4cac-42bd-93d2-127358985699/volumes" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.349524 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" event={"ID":"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09","Type":"ContainerStarted","Data":"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b"} Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.351434 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.352661 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" event={"ID":"8275f0ab-08a1-4f0f-a942-41d2fb053e75","Type":"ContainerStarted","Data":"463156cc05b552cd433d81bb7511512eb515505d092da09f42d53eb83de0d498"} Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.352691 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" event={"ID":"8275f0ab-08a1-4f0f-a942-41d2fb053e75","Type":"ContainerStarted","Data":"a131827916d9e96f2fd0a55175b1e895dec469740054152386eee8c461679574"} Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.353656 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.365689 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.378965 5110 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" podStartSLOduration=2.378942162 podStartE2EDuration="2.378942162s" podCreationTimestamp="2026-01-30 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:33.376605373 +0000 UTC m=+315.334841522" watchObservedRunningTime="2026-01-30 00:17:33.378942162 +0000 UTC m=+315.337178291" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.400990 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" podStartSLOduration=2.40096381 podStartE2EDuration="2.40096381s" podCreationTimestamp="2026-01-30 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:33.399180325 +0000 UTC m=+315.357416474" watchObservedRunningTime="2026-01-30 00:17:33.40096381 +0000 UTC m=+315.359199949" Jan 30 00:17:33 crc kubenswrapper[5110]: I0130 00:17:33.564352 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.073206 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"] Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.075170 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" podUID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" containerName="controller-manager" containerID="cri-o://463156cc05b552cd433d81bb7511512eb515505d092da09f42d53eb83de0d498" gracePeriod=30 Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.082134 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.082555 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" podUID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" containerName="route-controller-manager" containerID="cri-o://aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b" gracePeriod=30 Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.708200 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.757857 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x"] Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.758525 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" containerName="route-controller-manager" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.758541 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" containerName="route-controller-manager" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.758663 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" containerName="route-controller-manager" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.763766 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.779837 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x"] Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.789033 5110 generic.go:358] "Generic (PLEG): container finished" podID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" containerID="aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b" exitCode=0 Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.789122 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.789158 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" event={"ID":"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09","Type":"ContainerDied","Data":"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b"} Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.789227 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52" event={"ID":"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09","Type":"ContainerDied","Data":"2254d3e99800435e452419ed078c7a41aabf27ad3438409301ebd8749f2210f3"} Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.789255 5110 scope.go:117] "RemoveContainer" containerID="aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.793519 5110 generic.go:358] "Generic (PLEG): container finished" podID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" containerID="463156cc05b552cd433d81bb7511512eb515505d092da09f42d53eb83de0d498" exitCode=0 Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.793629 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" event={"ID":"8275f0ab-08a1-4f0f-a942-41d2fb053e75","Type":"ContainerDied","Data":"463156cc05b552cd433d81bb7511512eb515505d092da09f42d53eb83de0d498"} Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.809801 5110 scope.go:117] "RemoveContainer" containerID="aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b" Jan 30 00:17:51 crc kubenswrapper[5110]: E0130 
00:17:51.810247 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b\": container with ID starting with aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b not found: ID does not exist" containerID="aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.810284 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b"} err="failed to get container status \"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b\": rpc error: code = NotFound desc = could not find container \"aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b\": container with ID starting with aa7bf0c2f5837b10cf3b4e8270547f6daaba70a89d5b5fa9068ee4662eb49a7b not found: ID does not exist" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865123 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert\") pod \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865209 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config\") pod \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865348 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f5t4\" (UniqueName: \"kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4\") pod \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865396 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca\") pod \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865447 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp\") pod \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\" (UID: \"7d8f39dc-51ca-4f2b-bc6c-640b664d3e09\") " Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865554 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0df31c8-5b74-441c-9d62-9d59b9dee803-serving-cert\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865604 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bqv\" (UniqueName: \"kubernetes.io/projected/f0df31c8-5b74-441c-9d62-9d59b9dee803-kube-api-access-x8bqv\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: 
\"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865640 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0df31c8-5b74-441c-9d62-9d59b9dee803-tmp\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865682 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-client-ca\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.865698 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-config\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.866149 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp" (OuterVolumeSpecName: "tmp") pod "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" (UID: "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.866308 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config" (OuterVolumeSpecName: "config") pod "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" (UID: "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.866315 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" (UID: "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.873931 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" (UID: "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.874709 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4" (OuterVolumeSpecName: "kube-api-access-6f5t4") pod "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" (UID: "7d8f39dc-51ca-4f2b-bc6c-640b664d3e09"). InnerVolumeSpecName "kube-api-access-6f5t4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.969577 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0df31c8-5b74-441c-9d62-9d59b9dee803-tmp\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.969952 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-client-ca\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970007 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-config\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970216 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0df31c8-5b74-441c-9d62-9d59b9dee803-serving-cert\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970241 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0df31c8-5b74-441c-9d62-9d59b9dee803-tmp\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x8bqv\" (UniqueName: \"kubernetes.io/projected/f0df31c8-5b74-441c-9d62-9d59b9dee803-kube-api-access-x8bqv\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970605 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970642 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970661 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970671 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6f5t4\" (UniqueName: 
\"kubernetes.io/projected/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-kube-api-access-6f5t4\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.970684 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.971660 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-client-ca\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.972509 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0df31c8-5b74-441c-9d62-9d59b9dee803-config\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.976394 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0df31c8-5b74-441c-9d62-9d59b9dee803-serving-cert\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.986213 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" Jan 30 00:17:51 crc kubenswrapper[5110]: I0130 00:17:51.988984 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8bqv\" (UniqueName: \"kubernetes.io/projected/f0df31c8-5b74-441c-9d62-9d59b9dee803-kube-api-access-x8bqv\") pod \"route-controller-manager-7fb57748c7-4ds4x\" (UID: \"f0df31c8-5b74-441c-9d62-9d59b9dee803\") " pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.042460 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"] Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.043823 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" containerName="controller-manager" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.043839 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" containerName="controller-manager" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.044034 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" containerName="controller-manager" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.053666 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"] Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.053830 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.071744 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.071815 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.071881 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7sc6\" (UniqueName: \"kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.071937 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.072053 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.072074 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca\") pod \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\" (UID: \"8275f0ab-08a1-4f0f-a942-41d2fb053e75\") " Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.073493 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca" (OuterVolumeSpecName: "client-ca") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.074049 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config" (OuterVolumeSpecName: "config") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.083722 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.085126 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6" (OuterVolumeSpecName: "kube-api-access-j7sc6") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "kube-api-access-j7sc6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.085468 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.095797 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.095888 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp" (OuterVolumeSpecName: "tmp") pod "8275f0ab-08a1-4f0f-a942-41d2fb053e75" (UID: "8275f0ab-08a1-4f0f-a942-41d2fb053e75"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.151833 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.173419 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6ddfb8-l9t52"] Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174130 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-client-ca\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174184 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntqzl\" (UniqueName: \"kubernetes.io/projected/95c0be89-df54-412d-9153-aecf0c2cfafb-kube-api-access-ntqzl\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174219 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-config\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174263 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-proxy-ca-bundles\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174501 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0be89-df54-412d-9153-aecf0c2cfafb-serving-cert\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174634 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95c0be89-df54-412d-9153-aecf0c2cfafb-tmp\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174740 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174757 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8275f0ab-08a1-4f0f-a942-41d2fb053e75-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174769 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j7sc6\" (UniqueName: \"kubernetes.io/projected/8275f0ab-08a1-4f0f-a942-41d2fb053e75-kube-api-access-j7sc6\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174781 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174811 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8275f0ab-08a1-4f0f-a942-41d2fb053e75-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.174820 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8275f0ab-08a1-4f0f-a942-41d2fb053e75-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.277048 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-proxy-ca-bundles\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.277122 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0be89-df54-412d-9153-aecf0c2cfafb-serving-cert\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" Jan 30 00:17:52 crc kubenswrapper[5110]: 
I0130 00:17:52.277163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95c0be89-df54-412d-9153-aecf0c2cfafb-tmp\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.277200 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-client-ca\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.277221 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntqzl\" (UniqueName: \"kubernetes.io/projected/95c0be89-df54-412d-9153-aecf0c2cfafb-kube-api-access-ntqzl\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.277485 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-config\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.278450 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95c0be89-df54-412d-9153-aecf0c2cfafb-tmp\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.279381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-config\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.279592 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-proxy-ca-bundles\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.279626 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95c0be89-df54-412d-9153-aecf0c2cfafb-client-ca\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.289360 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0be89-df54-412d-9153-aecf0c2cfafb-serving-cert\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.313978 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntqzl\" (UniqueName: \"kubernetes.io/projected/95c0be89-df54-412d-9153-aecf0c2cfafb-kube-api-access-ntqzl\") pod \"controller-manager-5b6b546f9c-z7ff2\" (UID: \"95c0be89-df54-412d-9153-aecf0c2cfafb\") " pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.378513 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x"]
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.398533 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.720790 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"]
Jan 30 00:17:52 crc kubenswrapper[5110]: W0130 00:17:52.730149 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95c0be89_df54_412d_9153_aecf0c2cfafb.slice/crio-072329c724d0d5437a8ac9e0ff8a2b77077e9ddb460d8bad96ba7f85c2355457 WatchSource:0}: Error finding container 072329c724d0d5437a8ac9e0ff8a2b77077e9ddb460d8bad96ba7f85c2355457: Status 404 returned error can't find the container with id 072329c724d0d5437a8ac9e0ff8a2b77077e9ddb460d8bad96ba7f85c2355457
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.800797 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" event={"ID":"95c0be89-df54-412d-9153-aecf0c2cfafb","Type":"ContainerStarted","Data":"072329c724d0d5437a8ac9e0ff8a2b77077e9ddb460d8bad96ba7f85c2355457"}
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.802310 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" event={"ID":"f0df31c8-5b74-441c-9d62-9d59b9dee803","Type":"ContainerStarted","Data":"cce9076cf8595475fbe1ba740cd9b8e047413aabe3cf144db9ce92045c963bf4"}
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.802390 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" event={"ID":"f0df31c8-5b74-441c-9d62-9d59b9dee803","Type":"ContainerStarted","Data":"76aea2f2560400ad8716f431c32d57efa0dc261dcabf0adc9deca49ac7b399c5"}
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.802965 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.805634 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r" event={"ID":"8275f0ab-08a1-4f0f-a942-41d2fb053e75","Type":"ContainerDied","Data":"a131827916d9e96f2fd0a55175b1e895dec469740054152386eee8c461679574"}
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.805657 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.805711 5110 scope.go:117] "RemoveContainer" containerID="463156cc05b552cd433d81bb7511512eb515505d092da09f42d53eb83de0d498"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.831240 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x" podStartSLOduration=1.8312200600000001 podStartE2EDuration="1.83122006s" podCreationTimestamp="2026-01-30 00:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:52.825897171 +0000 UTC m=+334.784133290" watchObservedRunningTime="2026-01-30 00:17:52.83122006 +0000 UTC m=+334.789456189"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.858895 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"]
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.865797 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-585d4d46c8-xdf8r"]
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.880954 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8f39dc-51ca-4f2b-bc6c-640b664d3e09" path="/var/lib/kubelet/pods/7d8f39dc-51ca-4f2b-bc6c-640b664d3e09/volumes"
Jan 30 00:17:52 crc kubenswrapper[5110]: I0130 00:17:52.881958 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8275f0ab-08a1-4f0f-a942-41d2fb053e75" path="/var/lib/kubelet/pods/8275f0ab-08a1-4f0f-a942-41d2fb053e75/volumes"
Jan 30 00:17:53 crc kubenswrapper[5110]: I0130 00:17:53.354955 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fb57748c7-4ds4x"
Jan 30 00:17:53 crc kubenswrapper[5110]: I0130 00:17:53.815743 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" event={"ID":"95c0be89-df54-412d-9153-aecf0c2cfafb","Type":"ContainerStarted","Data":"b7c8cea7e1f114246d35cd0f4b8c04d4b14f8ee905a44651cb0625fe2e62d155"}
Jan 30 00:17:53 crc kubenswrapper[5110]: I0130 00:17:53.816140 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:53 crc kubenswrapper[5110]: I0130 00:17:53.822948 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2"
Jan 30 00:17:53 crc kubenswrapper[5110]: I0130 00:17:53.867605 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b6b546f9c-z7ff2" podStartSLOduration=2.867577754 podStartE2EDuration="2.867577754s" podCreationTimestamp="2026-01-30 00:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:53.839156202 +0000 UTC m=+335.797392351" watchObservedRunningTime="2026-01-30 00:17:53.867577754 +0000 UTC m=+335.825813883"
Jan 30 00:18:05 crc kubenswrapper[5110]: I0130 00:18:05.455080 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.753151 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.754747 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jbdwz" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="registry-server" containerID="cri-o://644609d40ef8ac6c06c92c064781ab68471a42256539125499c71aaab76db2cd" gracePeriod=30
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.762961 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw6vt"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.763590 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bw6vt" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="registry-server" containerID="cri-o://14f7bcfd0285f42cde921c8b2c95b711d9197c4b60f9f0fc99905ad68265932f" gracePeriod=30
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.778989 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.780225 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator" containerID="cri-o://fde233465eb471f43615b4c8cd1939eb99565bf05bda67e97fd0da4c8fefce5e" gracePeriod=30
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.793226 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.793651 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8l6l9" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="registry-server" containerID="cri-o://3abe7e1a0ebf8e2db3c06988f8757a5ef0c2a13cf351a5fc27a9890dac85006f" gracePeriod=30
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.810593 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.818007 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.825866 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.826240 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dksh5" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="registry-server" containerID="cri-o://c064f8dd3db3b76d883381c6dcde6de5459a062ce3a6e39d706e28ac9f9b0a1e" gracePeriod=30
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.843676 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"]
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.900765 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.900853 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.900987 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-tmp\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:21 crc kubenswrapper[5110]: I0130 00:18:21.901023 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx528\" (UniqueName: \"kubernetes.io/projected/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-kube-api-access-qx528\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.002889 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.002952 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.003012 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-tmp\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.003044 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qx528\" (UniqueName: \"kubernetes.io/projected/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-kube-api-access-qx528\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.004970 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-tmp\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.010981 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.015682 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.023381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx528\" (UniqueName: \"kubernetes.io/projected/7967f41f-db4e-44fc-bdbc-2b67566a8fd9-kube-api-access-qx528\") pod \"marketplace-operator-547dbd544d-9bmtj\" (UID: \"7967f41f-db4e-44fc-bdbc-2b67566a8fd9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.045543 5110 generic.go:358] "Generic (PLEG): container finished" podID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerID="14f7bcfd0285f42cde921c8b2c95b711d9197c4b60f9f0fc99905ad68265932f" exitCode=0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.045690 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerDied","Data":"14f7bcfd0285f42cde921c8b2c95b711d9197c4b60f9f0fc99905ad68265932f"}
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.048070 5110 generic.go:358] "Generic (PLEG): container finished" podID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerID="3abe7e1a0ebf8e2db3c06988f8757a5ef0c2a13cf351a5fc27a9890dac85006f" exitCode=0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.048304 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerDied","Data":"3abe7e1a0ebf8e2db3c06988f8757a5ef0c2a13cf351a5fc27a9890dac85006f"}
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.052521 5110 generic.go:358] "Generic (PLEG): container finished" podID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerID="644609d40ef8ac6c06c92c064781ab68471a42256539125499c71aaab76db2cd" exitCode=0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.052727 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbdwz" event={"ID":"4ef72b04-6d5e-47c5-ad83-fd680d001a38","Type":"ContainerDied","Data":"644609d40ef8ac6c06c92c064781ab68471a42256539125499c71aaab76db2cd"}
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.054356 5110 generic.go:358] "Generic (PLEG): container finished" podID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerID="fde233465eb471f43615b4c8cd1939eb99565bf05bda67e97fd0da4c8fefce5e" exitCode=0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.054488 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerDied","Data":"fde233465eb471f43615b4c8cd1939eb99565bf05bda67e97fd0da4c8fefce5e"}
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.054575 5110 scope.go:117] "RemoveContainer" containerID="5c694b2e34bee137a9a92b625c41e7da7d318ea0fc9589befde5488183ef71b3"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.060313 5110 generic.go:358] "Generic (PLEG): container finished" podID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerID="c064f8dd3db3b76d883381c6dcde6de5459a062ce3a6e39d706e28ac9f9b0a1e" exitCode=0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.060573 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerDied","Data":"c064f8dd3db3b76d883381c6dcde6de5459a062ce3a6e39d706e28ac9f9b0a1e"}
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.139474 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.222472 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.306761 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content\") pod \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.306871 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities\") pod \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.306939 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z\") pod \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\" (UID: \"8b33ddbf-d5b6-42be-a4d1-978a794801eb\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.310285 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities" (OuterVolumeSpecName: "utilities") pod "8b33ddbf-d5b6-42be-a4d1-978a794801eb" (UID: "8b33ddbf-d5b6-42be-a4d1-978a794801eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.315900 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z" (OuterVolumeSpecName: "kube-api-access-c657z") pod "8b33ddbf-d5b6-42be-a4d1-978a794801eb" (UID: "8b33ddbf-d5b6-42be-a4d1-978a794801eb"). InnerVolumeSpecName "kube-api-access-c657z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.368945 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbdwz"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.388908 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b33ddbf-d5b6-42be-a4d1-978a794801eb" (UID: "8b33ddbf-d5b6-42be-a4d1-978a794801eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.396021 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.404820 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l6l9"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.408708 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics\") pod \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409552 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca\") pod \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409668 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp\") pod \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409717 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgqxx\" (UniqueName: \"kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx\") pod \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\" (UID: \"15cb8a86-a5a4-482f-9466-243fd0a2b4f0\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409742 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vqn5\" (UniqueName: \"kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5\") pod \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409781 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content\") pod \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.409828 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities\") pod \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\" (UID: \"4ef72b04-6d5e-47c5-ad83-fd680d001a38\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.410046 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.410060 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33ddbf-d5b6-42be-a4d1-978a794801eb-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.410069 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/8b33ddbf-d5b6-42be-a4d1-978a794801eb-kube-api-access-c657z\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.411454 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities" (OuterVolumeSpecName: "utilities") pod "4ef72b04-6d5e-47c5-ad83-fd680d001a38" (UID: "4ef72b04-6d5e-47c5-ad83-fd680d001a38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.413901 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "15cb8a86-a5a4-482f-9466-243fd0a2b4f0" (UID: "15cb8a86-a5a4-482f-9466-243fd0a2b4f0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.414172 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp" (OuterVolumeSpecName: "tmp") pod "15cb8a86-a5a4-482f-9466-243fd0a2b4f0" (UID: "15cb8a86-a5a4-482f-9466-243fd0a2b4f0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.418872 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "15cb8a86-a5a4-482f-9466-243fd0a2b4f0" (UID: "15cb8a86-a5a4-482f-9466-243fd0a2b4f0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.419739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5" (OuterVolumeSpecName: "kube-api-access-8vqn5") pod "4ef72b04-6d5e-47c5-ad83-fd680d001a38" (UID: "4ef72b04-6d5e-47c5-ad83-fd680d001a38"). InnerVolumeSpecName "kube-api-access-8vqn5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.420701 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx" (OuterVolumeSpecName: "kube-api-access-mgqxx") pod "15cb8a86-a5a4-482f-9466-243fd0a2b4f0" (UID: "15cb8a86-a5a4-482f-9466-243fd0a2b4f0"). InnerVolumeSpecName "kube-api-access-mgqxx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.450540 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef72b04-6d5e-47c5-ad83-fd680d001a38" (UID: "4ef72b04-6d5e-47c5-ad83-fd680d001a38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513016 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content\") pod \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513113 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities\") pod \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513190 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9qp\" (UniqueName: \"kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp\") pod \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\" (UID: \"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513410 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513428 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-tmp\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513438 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mgqxx\" (UniqueName: \"kubernetes.io/projected/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-kube-api-access-mgqxx\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513448 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vqn5\" (UniqueName: \"kubernetes.io/projected/4ef72b04-6d5e-47c5-ad83-fd680d001a38-kube-api-access-8vqn5\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513458 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513466 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef72b04-6d5e-47c5-ad83-fd680d001a38-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.513475 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15cb8a86-a5a4-482f-9466-243fd0a2b4f0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.514742 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities" (OuterVolumeSpecName: "utilities") pod "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" (UID: "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.519511 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp" (OuterVolumeSpecName: "kube-api-access-cc9qp") pod "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" (UID: "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958"). InnerVolumeSpecName "kube-api-access-cc9qp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.519959 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.534028 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" (UID: "3623fe3d-6a68-4c3a-9c0f-c3eb381dd958"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.614974 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8dg\" (UniqueName: \"kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg\") pod \"dbf09b72-11d4-49f3-977d-60a148c40caf\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.615164 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities\") pod \"dbf09b72-11d4-49f3-977d-60a148c40caf\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.615256 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content\") pod \"dbf09b72-11d4-49f3-977d-60a148c40caf\" (UID: \"dbf09b72-11d4-49f3-977d-60a148c40caf\") "
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.615574 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cc9qp\" (UniqueName: \"kubernetes.io/projected/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-kube-api-access-cc9qp\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.615604 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.615622 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.616889 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities" (OuterVolumeSpecName: "utilities") pod "dbf09b72-11d4-49f3-977d-60a148c40caf" (UID: "dbf09b72-11d4-49f3-977d-60a148c40caf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.619065 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg" (OuterVolumeSpecName: "kube-api-access-8z8dg") pod "dbf09b72-11d4-49f3-977d-60a148c40caf" (UID: "dbf09b72-11d4-49f3-977d-60a148c40caf"). InnerVolumeSpecName "kube-api-access-8z8dg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.693082 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"]
Jan 30 00:18:22 crc kubenswrapper[5110]: W0130 00:18:22.693768 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7967f41f_db4e_44fc_bdbc_2b67566a8fd9.slice/crio-6f22c1a934bddc164c8e6867758d59d635b464db3e8cb67ac629ac23c6a734a0 WatchSource:0}: Error finding container 6f22c1a934bddc164c8e6867758d59d635b464db3e8cb67ac629ac23c6a734a0: Status 404 returned error can't find the container with id 6f22c1a934bddc164c8e6867758d59d635b464db3e8cb67ac629ac23c6a734a0
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.717395 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.717463 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8z8dg\" (UniqueName: \"kubernetes.io/projected/dbf09b72-11d4-49f3-977d-60a148c40caf-kube-api-access-8z8dg\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.752681 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbf09b72-11d4-49f3-977d-60a148c40caf" (UID: "dbf09b72-11d4-49f3-977d-60a148c40caf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:18:22 crc kubenswrapper[5110]: I0130 00:18:22.818636 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf09b72-11d4-49f3-977d-60a148c40caf-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.073377 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw6vt" event={"ID":"8b33ddbf-d5b6-42be-a4d1-978a794801eb","Type":"ContainerDied","Data":"1d559c8131b65830d407465c5f1a3e352213683d387841cbfa6f18ffd4ad3e63"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.073526 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw6vt"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.073750 5110 scope.go:117] "RemoveContainer" containerID="14f7bcfd0285f42cde921c8b2c95b711d9197c4b60f9f0fc99905ad68265932f"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.076552 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l6l9" event={"ID":"3623fe3d-6a68-4c3a-9c0f-c3eb381dd958","Type":"ContainerDied","Data":"30c3dea5021f7d0321fdd82b3231c35bf8451d803701bab429b57d7bf9e0c2eb"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.077026 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l6l9"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.089374 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbdwz"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.089397 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbdwz" event={"ID":"4ef72b04-6d5e-47c5-ad83-fd680d001a38","Type":"ContainerDied","Data":"01b90203016c1a23c3ab1c7282035c0fc264427200b5369541dc1a34652e3a70"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.094489 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj" event={"ID":"7967f41f-db4e-44fc-bdbc-2b67566a8fd9","Type":"ContainerStarted","Data":"4301981bd81093e1355838ed83e79ddb0a81cb5a77f19bb11f66f796dde028f2"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.094601 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj" event={"ID":"7967f41f-db4e-44fc-bdbc-2b67566a8fd9","Type":"ContainerStarted","Data":"6f22c1a934bddc164c8e6867758d59d635b464db3e8cb67ac629ac23c6a734a0"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.097083 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.097479 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt" event={"ID":"15cb8a86-a5a4-482f-9466-243fd0a2b4f0","Type":"ContainerDied","Data":"defcb9ae9bfda8979a7f29ddcb791c1f646ff5dadbb5434291bac726c774d343"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.097688 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.098920 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9bmtj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body=
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.099019 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj" podUID="7967f41f-db4e-44fc-bdbc-2b67566a8fd9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.102340 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dksh5" event={"ID":"dbf09b72-11d4-49f3-977d-60a148c40caf","Type":"ContainerDied","Data":"e682ae88aa4f1035781244adc67c82d9b1709f3cbae4422e9539d7c6ae97a675"}
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.102727 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dksh5"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.113424 5110 scope.go:117] "RemoveContainer" containerID="89f78f12ad4cb54cb9a5e4ffa9a9c36254bd80f391fd084eb7baa8acd255c331"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.125770 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw6vt"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.154984 5110 scope.go:117] "RemoveContainer" containerID="c087ecfda238bc31ff77ddcb9a405db1c7a7fa452b6a2e20bce155584c9ba5cd"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.163799 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bw6vt"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.174451 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.185386 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj" podStartSLOduration=2.185367811 podStartE2EDuration="2.185367811s" podCreationTimestamp="2026-01-30 00:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:18:23.163676844 +0000 UTC m=+365.121912973" watchObservedRunningTime="2026-01-30 00:18:23.185367811 +0000 UTC m=+365.143603940"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.191619 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jbdwz"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.194015 5110 scope.go:117] "RemoveContainer" containerID="3abe7e1a0ebf8e2db3c06988f8757a5ef0c2a13cf351a5fc27a9890dac85006f"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.212401 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.216684 5110 scope.go:117] "RemoveContainer" containerID="5b5269c0afb2029ea80abbb6a3333651a81c6699209e3fb4d36cfbc453bd5a5f"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.219224 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kxkkt"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.231297 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.235644 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l6l9"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.242942 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.243003 5110 scope.go:117] "RemoveContainer" containerID="77275dd151930822d05a8ad3719cbe61b8509347ef73600f71ead598321c825a"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.248319 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dksh5"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.258364 5110 scope.go:117] "RemoveContainer" containerID="644609d40ef8ac6c06c92c064781ab68471a42256539125499c71aaab76db2cd"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.281996 5110 scope.go:117] "RemoveContainer" containerID="da718c48f01efcf8941d5cbbdeaf9f54153596156c51997e417de41a34470959"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.298630 5110 scope.go:117] "RemoveContainer" containerID="29686b3a7804cadf449422f4901fa2f9dc2a71e00f9e847e55ddce0fe5978885"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.315034 5110 scope.go:117] "RemoveContainer" containerID="fde233465eb471f43615b4c8cd1939eb99565bf05bda67e97fd0da4c8fefce5e"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.334697 5110 scope.go:117] "RemoveContainer" containerID="c064f8dd3db3b76d883381c6dcde6de5459a062ce3a6e39d706e28ac9f9b0a1e"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.349805 5110 scope.go:117] "RemoveContainer" containerID="5c1467ddb30c9770caa1a0b92e8397d0b20f00b66a2ede788b8e390b4d5ce0cf"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.383128 5110 scope.go:117] "RemoveContainer" containerID="214ead4c0c332a5ed61cfb3d10f9817595c4979b3f119038a17700bb2ee06e07"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.970524 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971712 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971730 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971748 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971756 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971767 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971775 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971785 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971793 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971804 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971812 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971822 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971829 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971844 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971852 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971865 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971873 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971888 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971895 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971909 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971916 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971941 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971949 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971958 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971966 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971977 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971986 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="extract-utilities"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.971997 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972005 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="extract-content"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972116 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972130 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972143 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972157 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972169 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" containerName="marketplace-operator"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.972183 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" containerName="registry-server"
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.988477 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"]
Jan 30 00:18:23 crc kubenswrapper[5110]: I0130 00:18:23.988660 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.004246 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.038914 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.038969 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.039019 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4nmf\" (UniqueName: \"kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.118098 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9bmtj"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.140118 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.140180 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.140249 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4nmf\" (UniqueName: \"kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.140823 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.141084 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.178981 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8znbs"]
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.187340 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4nmf\" (UniqueName: \"kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf\") pod \"redhat-marketplace-42v84\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.192884 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.199762 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.205400 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8znbs"]
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.241665 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-utilities\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.241753 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8f2x\" (UniqueName: \"kubernetes.io/projected/b24ec3cb-77b2-49fd-ae11-4c99a2020581-kube-api-access-v8f2x\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.242068 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-catalog-content\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.320182 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42v84"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.344429 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-utilities\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.344543 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v8f2x\" (UniqueName: \"kubernetes.io/projected/b24ec3cb-77b2-49fd-ae11-4c99a2020581-kube-api-access-v8f2x\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.344992 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-utilities\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.347959 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-catalog-content\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.348616 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24ec3cb-77b2-49fd-ae11-4c99a2020581-catalog-content\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.381529 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8f2x\" (UniqueName: \"kubernetes.io/projected/b24ec3cb-77b2-49fd-ae11-4c99a2020581-kube-api-access-v8f2x\") pod \"redhat-operators-8znbs\" (UID: \"b24ec3cb-77b2-49fd-ae11-4c99a2020581\") " pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.514553 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8znbs"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.756501 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"]
Jan 30 00:18:24 crc kubenswrapper[5110]: W0130 00:18:24.766161 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47b09ee_8474_4439_8902_26b107135f5f.slice/crio-fcf14e55a5927db210190a170ebca7f764deebb8e88b4a86c6e93635d321f19e WatchSource:0}: Error finding container fcf14e55a5927db210190a170ebca7f764deebb8e88b4a86c6e93635d321f19e: Status 404 returned error can't find the container with id fcf14e55a5927db210190a170ebca7f764deebb8e88b4a86c6e93635d321f19e
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.881663 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cb8a86-a5a4-482f-9466-243fd0a2b4f0" path="/var/lib/kubelet/pods/15cb8a86-a5a4-482f-9466-243fd0a2b4f0/volumes"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.882929 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3623fe3d-6a68-4c3a-9c0f-c3eb381dd958" path="/var/lib/kubelet/pods/3623fe3d-6a68-4c3a-9c0f-c3eb381dd958/volumes"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.883755 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef72b04-6d5e-47c5-ad83-fd680d001a38" path="/var/lib/kubelet/pods/4ef72b04-6d5e-47c5-ad83-fd680d001a38/volumes"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.885100 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b33ddbf-d5b6-42be-a4d1-978a794801eb" path="/var/lib/kubelet/pods/8b33ddbf-d5b6-42be-a4d1-978a794801eb/volumes"
Jan 30 00:18:24 crc kubenswrapper[5110]: I0130 00:18:24.886318 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf09b72-11d4-49f3-977d-60a148c40caf" path="/var/lib/kubelet/pods/dbf09b72-11d4-49f3-977d-60a148c40caf/volumes"
Jan 30 00:18:25 crc kubenswrapper[5110]: I0130 00:18:25.026286 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8znbs"]
Jan 30 00:18:25 crc kubenswrapper[5110]: W0130 00:18:25.042254 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb24ec3cb_77b2_49fd_ae11_4c99a2020581.slice/crio-3e0849ea7666577488ed177684d267fc71515e0b38fd8d87ed9e766e2bd64ca2 WatchSource:0}: Error finding container 3e0849ea7666577488ed177684d267fc71515e0b38fd8d87ed9e766e2bd64ca2: Status 404 returned error can't find the container with id 3e0849ea7666577488ed177684d267fc71515e0b38fd8d87ed9e766e2bd64ca2
Jan 30 00:18:25 crc kubenswrapper[5110]: I0130 00:18:25.126321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8znbs" event={"ID":"b24ec3cb-77b2-49fd-ae11-4c99a2020581","Type":"ContainerStarted","Data":"3e0849ea7666577488ed177684d267fc71515e0b38fd8d87ed9e766e2bd64ca2"}
Jan 30 00:18:25 crc kubenswrapper[5110]: I0130 00:18:25.130156 5110 generic.go:358] "Generic (PLEG): container finished" podID="e47b09ee-8474-4439-8902-26b107135f5f" containerID="6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922" exitCode=0
Jan 30 00:18:25 crc kubenswrapper[5110]: I0130 00:18:25.130571 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerDied","Data":"6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922"}
Jan 30 00:18:25 crc kubenswrapper[5110]: I0130 00:18:25.130671 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerStarted","Data":"fcf14e55a5927db210190a170ebca7f764deebb8e88b4a86c6e93635d321f19e"}
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.138483 5110 generic.go:358] "Generic (PLEG): container finished" podID="e47b09ee-8474-4439-8902-26b107135f5f" containerID="b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9" exitCode=0
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.138553 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerDied","Data":"b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9"}
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.140871 5110 generic.go:358] "Generic (PLEG): container finished" podID="b24ec3cb-77b2-49fd-ae11-4c99a2020581" containerID="e4d7a0d8dc2b38a606fe1ceede1627b09b4e699f16337e2c2a8400cb331e984a" exitCode=0
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.141014 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8znbs" event={"ID":"b24ec3cb-77b2-49fd-ae11-4c99a2020581","Type":"ContainerDied","Data":"e4d7a0d8dc2b38a606fe1ceede1627b09b4e699f16337e2c2a8400cb331e984a"}
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.367912 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxhq6"]
Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.376194 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.380986 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.386529 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxhq6"] Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.484731 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm2q\" (UniqueName: \"kubernetes.io/projected/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-kube-api-access-6fm2q\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.484788 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-catalog-content\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.484835 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-utilities\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.564547 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nzp6n"] Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.576145 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.580766 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.585678 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzp6n"] Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.586017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6fm2q\" (UniqueName: \"kubernetes.io/projected/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-kube-api-access-6fm2q\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.586059 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-catalog-content\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.586095 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-utilities\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.586946 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-catalog-content\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.587596 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-utilities\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.625017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fm2q\" (UniqueName: \"kubernetes.io/projected/6b6ddc39-c7d9-4cc9-b843-c338baeb95f7-kube-api-access-6fm2q\") pod \"community-operators-wxhq6\" (UID: \"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7\") " pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.687207 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vw9\" (UniqueName: \"kubernetes.io/projected/5fbf6653-173e-4277-8c52-24d58ad8733a-kube-api-access-f8vw9\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.687276 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-catalog-content\") pod 
\"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.687319 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-utilities\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.707767 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.788655 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vw9\" (UniqueName: \"kubernetes.io/projected/5fbf6653-173e-4277-8c52-24d58ad8733a-kube-api-access-f8vw9\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.788729 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-catalog-content\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.788915 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-utilities\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.789263 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-catalog-content\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.789459 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbf6653-173e-4277-8c52-24d58ad8733a-utilities\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.860339 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bmz2n"] Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.868162 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vw9\" (UniqueName: \"kubernetes.io/projected/5fbf6653-173e-4277-8c52-24d58ad8733a-kube-api-access-f8vw9\") pod \"certified-operators-nzp6n\" (UID: \"5fbf6653-173e-4277-8c52-24d58ad8733a\") " pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.870635 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.891998 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.894961 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bmz2n"] Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.992493 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-registry-certificates\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.992559 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.992703 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl4x\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-kube-api-access-mnl4x\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.992789 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-registry-tls\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.992960 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-trusted-ca\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.993204 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc468914-36d6-4569-ac7d-1819e318850b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: I0130 00:18:26.993293 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc468914-36d6-4569-ac7d-1819e318850b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:26 crc kubenswrapper[5110]: 
I0130 00:18:26.993354 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.042443 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096723 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-registry-certificates\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096776 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096806 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl4x\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-kube-api-access-mnl4x\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096827 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-registry-tls\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096868 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-trusted-ca\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096920 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc468914-36d6-4569-ac7d-1819e318850b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.096955 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/cc468914-36d6-4569-ac7d-1819e318850b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.100288 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-trusted-ca\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.100325 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc468914-36d6-4569-ac7d-1819e318850b-registry-certificates\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.101010 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc468914-36d6-4569-ac7d-1819e318850b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.106973 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-registry-tls\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.107100 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc468914-36d6-4569-ac7d-1819e318850b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.116006 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.120221 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl4x\" (UniqueName: \"kubernetes.io/projected/cc468914-36d6-4569-ac7d-1819e318850b-kube-api-access-mnl4x\") pod \"image-registry-5d9d95bf5b-bmz2n\" (UID: \"cc468914-36d6-4569-ac7d-1819e318850b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.152014 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerStarted","Data":"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169"} Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.158210 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8znbs" event={"ID":"b24ec3cb-77b2-49fd-ae11-4c99a2020581","Type":"ContainerStarted","Data":"47479b4d955cf48e95956a237ebb8d027c538fbba8a74bfe3eb9087db75a4e04"} Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.174924 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-42v84" podStartSLOduration=3.502628414 podStartE2EDuration="4.174904054s" podCreationTimestamp="2026-01-30 00:18:23 +0000 UTC" firstStartedPulling="2026-01-30 00:18:25.133187125 +0000 UTC m=+367.091423264" lastFinishedPulling="2026-01-30 00:18:25.805462735 +0000 UTC m=+367.763698904" observedRunningTime="2026-01-30 00:18:27.17319724 +0000 UTC m=+369.131433369" watchObservedRunningTime="2026-01-30 00:18:27.174904054 +0000 UTC m=+369.133140183" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.195664 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.233453 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxhq6"] Jan 30 00:18:27 crc kubenswrapper[5110]: W0130 00:18:27.238859 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b6ddc39_c7d9_4cc9_b843_c338baeb95f7.slice/crio-2f39771b9b082b4137daa04dd9243fa1f7b094ca1f2fc64ac29c5a171190d61c WatchSource:0}: Error finding container 2f39771b9b082b4137daa04dd9243fa1f7b094ca1f2fc64ac29c5a171190d61c: Status 404 returned error can't find the container with id 2f39771b9b082b4137daa04dd9243fa1f7b094ca1f2fc64ac29c5a171190d61c Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.362422 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzp6n"] Jan 30 00:18:27 crc kubenswrapper[5110]: I0130 00:18:27.660640 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bmz2n"] Jan 30 00:18:27 crc kubenswrapper[5110]: W0130 00:18:27.709690 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc468914_36d6_4569_ac7d_1819e318850b.slice/crio-5f9a9b13eb332900a012dc4defe0272d09a3c67cf22542d868476eb2726777f9 WatchSource:0}: Error finding container 5f9a9b13eb332900a012dc4defe0272d09a3c67cf22542d868476eb2726777f9: Status 404 returned error can't find the container with id 5f9a9b13eb332900a012dc4defe0272d09a3c67cf22542d868476eb2726777f9 Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.168333 5110 generic.go:358] "Generic (PLEG): container finished" podID="5fbf6653-173e-4277-8c52-24d58ad8733a" containerID="4ba3e0827a1787981e7deb1cf554bc794b0bd00f0bd9b9fb5ece4bd18e87f6be" exitCode=0 Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.168468 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzp6n" event={"ID":"5fbf6653-173e-4277-8c52-24d58ad8733a","Type":"ContainerDied","Data":"4ba3e0827a1787981e7deb1cf554bc794b0bd00f0bd9b9fb5ece4bd18e87f6be"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.168912 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzp6n" event={"ID":"5fbf6653-173e-4277-8c52-24d58ad8733a","Type":"ContainerStarted","Data":"3e65720ec1b1b2fa0aeae91aa6b6ab7b5dbfcfec0191d570786aedad4afb9f63"} Jan 30 
00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.172897 5110 generic.go:358] "Generic (PLEG): container finished" podID="6b6ddc39-c7d9-4cc9-b843-c338baeb95f7" containerID="ee0379937fba5d376681b0c28478622660cc7b5d9189d8473bc138e4d5c73edc" exitCode=0 Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.173054 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhq6" event={"ID":"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7","Type":"ContainerDied","Data":"ee0379937fba5d376681b0c28478622660cc7b5d9189d8473bc138e4d5c73edc"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.174497 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhq6" event={"ID":"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7","Type":"ContainerStarted","Data":"2f39771b9b082b4137daa04dd9243fa1f7b094ca1f2fc64ac29c5a171190d61c"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.177883 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" event={"ID":"cc468914-36d6-4569-ac7d-1819e318850b","Type":"ContainerStarted","Data":"30cedbce51f51960a867519aa61e074711129e2bd84b54bca8737c49903c123e"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.177921 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" event={"ID":"cc468914-36d6-4569-ac7d-1819e318850b","Type":"ContainerStarted","Data":"5f9a9b13eb332900a012dc4defe0272d09a3c67cf22542d868476eb2726777f9"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.177980 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.179624 5110 generic.go:358] "Generic (PLEG): container finished" podID="b24ec3cb-77b2-49fd-ae11-4c99a2020581" containerID="47479b4d955cf48e95956a237ebb8d027c538fbba8a74bfe3eb9087db75a4e04" exitCode=0 Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.180740 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8znbs" event={"ID":"b24ec3cb-77b2-49fd-ae11-4c99a2020581","Type":"ContainerDied","Data":"47479b4d955cf48e95956a237ebb8d027c538fbba8a74bfe3eb9087db75a4e04"} Jan 30 00:18:28 crc kubenswrapper[5110]: I0130 00:18:28.221673 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" podStartSLOduration=2.22163216 podStartE2EDuration="2.22163216s" podCreationTimestamp="2026-01-30 00:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:18:28.209532234 +0000 UTC m=+370.167768403" watchObservedRunningTime="2026-01-30 00:18:28.22163216 +0000 UTC m=+370.179868329" Jan 30 00:18:29 crc kubenswrapper[5110]: I0130 00:18:29.190545 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8znbs" event={"ID":"b24ec3cb-77b2-49fd-ae11-4c99a2020581","Type":"ContainerStarted","Data":"55135dd680c75d8608a9c04efef8ef2989b147dd83fbdb5054768e1c3f13bf18"} Jan 30 00:18:29 crc kubenswrapper[5110]: I0130 00:18:29.195539 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzp6n" 
event={"ID":"5fbf6653-173e-4277-8c52-24d58ad8733a","Type":"ContainerStarted","Data":"f027bc4a0513933df0f96cabd58c1b2edbba3ef9d06b8db3082dc78cbe1a42d6"} Jan 30 00:18:29 crc kubenswrapper[5110]: I0130 00:18:29.209209 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8znbs" podStartSLOduration=4.528567871 podStartE2EDuration="5.209183098s" podCreationTimestamp="2026-01-30 00:18:24 +0000 UTC" firstStartedPulling="2026-01-30 00:18:26.142487523 +0000 UTC m=+368.100723652" lastFinishedPulling="2026-01-30 00:18:26.82310275 +0000 UTC m=+368.781338879" observedRunningTime="2026-01-30 00:18:29.207820062 +0000 UTC m=+371.166056231" watchObservedRunningTime="2026-01-30 00:18:29.209183098 +0000 UTC m=+371.167419227" Jan 30 00:18:30 crc kubenswrapper[5110]: I0130 00:18:30.205370 5110 generic.go:358] "Generic (PLEG): container finished" podID="5fbf6653-173e-4277-8c52-24d58ad8733a" containerID="f027bc4a0513933df0f96cabd58c1b2edbba3ef9d06b8db3082dc78cbe1a42d6" exitCode=0 Jan 30 00:18:30 crc kubenswrapper[5110]: I0130 00:18:30.205589 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzp6n" event={"ID":"5fbf6653-173e-4277-8c52-24d58ad8733a","Type":"ContainerDied","Data":"f027bc4a0513933df0f96cabd58c1b2edbba3ef9d06b8db3082dc78cbe1a42d6"} Jan 30 00:18:30 crc kubenswrapper[5110]: I0130 00:18:30.214654 5110 generic.go:358] "Generic (PLEG): container finished" podID="6b6ddc39-c7d9-4cc9-b843-c338baeb95f7" containerID="4d2d87bfccac3e9c5868ebafd03bee4dbe4e9da8cea37de92a8b906ff59bda9b" exitCode=0 Jan 30 00:18:30 crc kubenswrapper[5110]: I0130 00:18:30.214821 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhq6" event={"ID":"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7","Type":"ContainerDied","Data":"4d2d87bfccac3e9c5868ebafd03bee4dbe4e9da8cea37de92a8b906ff59bda9b"} Jan 30 00:18:31 crc kubenswrapper[5110]: I0130 00:18:31.227122 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhq6" event={"ID":"6b6ddc39-c7d9-4cc9-b843-c338baeb95f7","Type":"ContainerStarted","Data":"923e49ac75a8cd896fd9dc2f6b021803e24fd5f66acf41674a811580b609704a"} Jan 30 00:18:31 crc kubenswrapper[5110]: I0130 00:18:31.233400 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzp6n" event={"ID":"5fbf6653-173e-4277-8c52-24d58ad8733a","Type":"ContainerStarted","Data":"7ad75de2e96ebcef78a95a7bf01228b7070f949bc45548e53c565074ffaa06d5"} Jan 30 00:18:31 crc kubenswrapper[5110]: I0130 00:18:31.249055 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxhq6" podStartSLOduration=4.331376236 podStartE2EDuration="5.249035358s" podCreationTimestamp="2026-01-30 00:18:26 +0000 UTC" firstStartedPulling="2026-01-30 00:18:28.174010545 +0000 UTC m=+370.132246684" lastFinishedPulling="2026-01-30 00:18:29.091669677 +0000 UTC m=+371.049905806" observedRunningTime="2026-01-30 00:18:31.244663044 +0000 UTC m=+373.202899173" watchObservedRunningTime="2026-01-30 00:18:31.249035358 +0000 UTC m=+373.207271487" Jan 30 00:18:31 crc kubenswrapper[5110]: I0130 00:18:31.270942 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nzp6n" podStartSLOduration=4.393725136 podStartE2EDuration="5.27091879s" podCreationTimestamp="2026-01-30 00:18:26 +0000 UTC" firstStartedPulling="2026-01-30 
00:18:28.169478657 +0000 UTC m=+370.127714786" lastFinishedPulling="2026-01-30 00:18:29.046672321 +0000 UTC m=+371.004908440" observedRunningTime="2026-01-30 00:18:31.267124751 +0000 UTC m=+373.225360900" watchObservedRunningTime="2026-01-30 00:18:31.27091879 +0000 UTC m=+373.229154919" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.320467 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.320870 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.390907 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.515645 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8znbs" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.515711 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8znbs" Jan 30 00:18:34 crc kubenswrapper[5110]: I0130 00:18:34.589583 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8znbs" Jan 30 00:18:35 crc kubenswrapper[5110]: I0130 00:18:35.308649 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:18:35 crc kubenswrapper[5110]: I0130 00:18:35.311551 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8znbs" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.708647 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.709392 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.779735 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.893564 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.893618 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:36 crc kubenswrapper[5110]: I0130 00:18:36.951640 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:37 crc kubenswrapper[5110]: I0130 00:18:37.339144 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxhq6" Jan 30 00:18:37 crc kubenswrapper[5110]: I0130 00:18:37.346394 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nzp6n" Jan 30 00:18:49 crc kubenswrapper[5110]: I0130 00:18:49.202722 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bmz2n" 
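The "Observed pod startup duration" entries above carry enough fields to reconstruct the kubelet's bookkeeping: the logged podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ clock). A minimal Go sketch checking the certified-operators-nzp6n numbers; this is arithmetic on the logged fields only, not the pod_startup_latency_tracker source:

    // slostartup.go — recompute podStartSLOduration for certified-operators-nzp6n
    // from the fields in the log entry above. Illustrative arithmetic only.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// podCreationTimestamp="2026-01-30 00:18:26 +0000 UTC"
    	created := time.Date(2026, 1, 30, 0, 18, 26, 0, time.UTC)
    	// watchObservedRunningTime="2026-01-30 00:18:31.27091879 +0000 UTC"
    	running := time.Date(2026, 1, 30, 0, 18, 31, 270918790, time.UTC)

    	// Pull window from the monotonic offsets (m=+...), in nanoseconds.
    	firstStartedPulling := time.Duration(370127714786) // m=+370.127714786
    	lastFinishedPulling := time.Duration(371004908440) // m=+371.004908440
    	pull := lastFinishedPulling - firstStartedPulling  // 877.193654ms

    	e2e := running.Sub(created) // 5.27091879s == podStartE2EDuration
    	slo := e2e - pull           // 4.393725136s == podStartSLOduration
    	fmt.Println(e2e, pull, slo)
    }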
Jan 30 00:18:49 crc kubenswrapper[5110]: I0130 00:18:49.279773 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"] Jan 30 00:19:09 crc kubenswrapper[5110]: I0130 00:19:09.210199 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:19:09 crc kubenswrapper[5110]: I0130 00:19:09.211447 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.339576 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" podUID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" containerName="registry" containerID="cri-o://ca4dcab40aef41094c6e5c4c440741457d0e9f0ef2477a1d684109e58bff866d" gracePeriod=30 Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.642776 5110 generic.go:358] "Generic (PLEG): container finished" podID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" containerID="ca4dcab40aef41094c6e5c4c440741457d0e9f0ef2477a1d684109e58bff866d" exitCode=0 Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.642973 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" event={"ID":"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71","Type":"ContainerDied","Data":"ca4dcab40aef41094c6e5c4c440741457d0e9f0ef2477a1d684109e58bff866d"}
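Two different kill paths are visible in the stretch above. The registry container is killed with gracePeriod=30 because its pod was deleted through the API ("SyncLoop DELETE"), while machine-config-daemon is failing an HTTP liveness probe with connection refused; once enough consecutive failures accumulate, the kubelet marks it unhealthy and restarts it (that kill, with gracePeriod=600, appears at 00:20:09 further down). A rough model of the probe classification and threshold logic, with assumed names and the stock threshold of 3, not the kubelet's prober code:

    // liveness.go — illustrative model of the liveness-failure path above:
    // an HTTP GET that cannot connect counts as a probe failure, and after
    // failureThreshold consecutive failures the container is killed with a
    // grace period. Names and flow are assumptions, not kubelet source.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probeOnce(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url) // "connect: connection refused" lands here
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	const failureThreshold = 3 // Kubernetes default for probes
    	failures := 0
    	for i := 0; i < failureThreshold; i++ {
    		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
    			failures++
    			fmt.Printf("Probe failed: %v (consecutive=%d)\n", err, failures)
    		} else {
    			failures = 0
    		}
    	}
    	if failures >= failureThreshold {
    		// Mirrors "failed liveness probe, will be restarted" above.
    		fmt.Println("killing container with a grace period, will be restarted")
    	}
    }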
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.823939 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824076 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlpqv\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824208 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824325 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824427 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824830 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.824917 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token\") pod \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\" (UID: \"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71\") " Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.828735 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.839778 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.841096 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.847611 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.848191 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv" (OuterVolumeSpecName: "kube-api-access-qlpqv") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "kube-api-access-qlpqv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.848806 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.858684 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.860901 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" (UID: "c9e7515a-7b79-4ad4-a08b-2a4133b7cd71"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928018 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928078 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928100 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928123 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlpqv\" (UniqueName: \"kubernetes.io/projected/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-kube-api-access-qlpqv\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928148 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928167 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:14 crc kubenswrapper[5110]: I0130 00:19:14.928185 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:15 crc kubenswrapper[5110]: I0130 00:19:15.654391 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" event={"ID":"c9e7515a-7b79-4ad4-a08b-2a4133b7cd71","Type":"ContainerDied","Data":"fafd3489479cd29b13872e5ddb61c8899368bf195547833dd2ff21a9f40d6d4d"} Jan 30 00:19:15 crc kubenswrapper[5110]: I0130 00:19:15.654423 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nh26b" Jan 30 00:19:15 crc kubenswrapper[5110]: I0130 00:19:15.654495 5110 scope.go:117] "RemoveContainer" containerID="ca4dcab40aef41094c6e5c4c440741457d0e9f0ef2477a1d684109e58bff866d" Jan 30 00:19:15 crc kubenswrapper[5110]: I0130 00:19:15.689775 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"] Jan 30 00:19:15 crc kubenswrapper[5110]: I0130 00:19:15.694633 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nh26b"] Jan 30 00:19:16 crc kubenswrapper[5110]: I0130 00:19:16.884837 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" path="/var/lib/kubelet/pods/c9e7515a-7b79-4ad4-a08b-2a4133b7cd71/volumes" Jan 30 00:19:39 crc kubenswrapper[5110]: I0130 00:19:39.211110 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:19:39 crc kubenswrapper[5110]: I0130 00:19:39.212693 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.145667 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495540-hvbqm"] Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.147368 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.147388 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.147596 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9e7515a-7b79-4ad4-a08b-2a4133b7cd71" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.153195 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-hvbqm"] Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.153368 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.157558 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.160037 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.161180 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.310552 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gd6\" (UniqueName: \"kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6\") pod \"auto-csr-approver-29495540-hvbqm\" (UID: \"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a\") " pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.412853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gd6\" (UniqueName: \"kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6\") pod \"auto-csr-approver-29495540-hvbqm\" (UID: \"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a\") " pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.442901 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gd6\" (UniqueName: \"kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6\") pod \"auto-csr-approver-29495540-hvbqm\" (UID: \"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a\") " pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.477177 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:00 crc kubenswrapper[5110]: I0130 00:20:00.797550 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-hvbqm"] Jan 30 00:20:01 crc kubenswrapper[5110]: I0130 00:20:01.065538 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" event={"ID":"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a","Type":"ContainerStarted","Data":"d28f0db03b106f36b7da2e1f1f7abccf6cc1718e105cd074251e7d5f98fddd75"} Jan 30 00:20:05 crc kubenswrapper[5110]: I0130 00:20:05.099544 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" event={"ID":"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a","Type":"ContainerStarted","Data":"a22ee6c105bf272ad06c09b9ef04ca89b3f8b94bf4da2a257a230755508241f4"} Jan 30 00:20:05 crc kubenswrapper[5110]: I0130 00:20:05.123788 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" podStartSLOduration=1.497565007 podStartE2EDuration="5.123748961s" podCreationTimestamp="2026-01-30 00:20:00 +0000 UTC" firstStartedPulling="2026-01-30 00:20:00.80318882 +0000 UTC m=+462.761424959" lastFinishedPulling="2026-01-30 00:20:04.429372784 +0000 UTC m=+466.387608913" observedRunningTime="2026-01-30 00:20:05.119954514 +0000 UTC m=+467.078190693" watchObservedRunningTime="2026-01-30 00:20:05.123748961 +0000 UTC m=+467.081985130" Jan 30 00:20:05 crc kubenswrapper[5110]: I0130 00:20:05.355618 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-6xm94" Jan 30 00:20:05 crc kubenswrapper[5110]: I0130 00:20:05.393214 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-6xm94" Jan 30 00:20:06 crc kubenswrapper[5110]: I0130 00:20:06.111436 5110 generic.go:358] "Generic (PLEG): container finished" podID="fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" containerID="a22ee6c105bf272ad06c09b9ef04ca89b3f8b94bf4da2a257a230755508241f4" exitCode=0 Jan 30 00:20:06 crc kubenswrapper[5110]: I0130 00:20:06.111609 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" event={"ID":"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a","Type":"ContainerDied","Data":"a22ee6c105bf272ad06c09b9ef04ca89b3f8b94bf4da2a257a230755508241f4"} Jan 30 00:20:06 crc kubenswrapper[5110]: I0130 00:20:06.395159 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:05 +0000 UTC" deadline="2026-02-22 06:31:16.663288253 +0000 UTC" Jan 30 00:20:06 crc kubenswrapper[5110]: I0130 00:20:06.395235 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="558h11m10.268060033s" Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.395809 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:05 +0000 UTC" deadline="2026-02-23 15:45:46.966611556 +0000 UTC" Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.395864 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="591h25m39.570754462s" Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.435547 5110 util.go:48] "No ready sandbox for pod can be 
Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.435547 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.526493 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2gd6\" (UniqueName: \"kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6\") pod \"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a\" (UID: \"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a\") " Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.538167 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6" (OuterVolumeSpecName: "kube-api-access-b2gd6") pod "fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" (UID: "fcc630c8-ed9b-48ed-b521-ee1b36e22c0a"). InnerVolumeSpecName "kube-api-access-b2gd6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:07 crc kubenswrapper[5110]: I0130 00:20:07.628869 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b2gd6\" (UniqueName: \"kubernetes.io/projected/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a-kube-api-access-b2gd6\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:08 crc kubenswrapper[5110]: I0130 00:20:08.131064 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" Jan 30 00:20:08 crc kubenswrapper[5110]: I0130 00:20:08.131063 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-hvbqm" event={"ID":"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a","Type":"ContainerDied","Data":"d28f0db03b106f36b7da2e1f1f7abccf6cc1718e105cd074251e7d5f98fddd75"} Jan 30 00:20:08 crc kubenswrapper[5110]: I0130 00:20:08.131256 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28f0db03b106f36b7da2e1f1f7abccf6cc1718e105cd074251e7d5f98fddd75" Jan 30 00:20:09 crc kubenswrapper[5110]: I0130 00:20:09.210903 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:20:09 crc kubenswrapper[5110]: I0130 00:20:09.211055 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:09 crc kubenswrapper[5110]: I0130 00:20:09.211150 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:20:09 crc kubenswrapper[5110]: I0130 00:20:09.212605 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3"} pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
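
The liveness exchange above is an ordinary HTTP GET against the container's health endpoint: a refused connection counts as a failure exactly like a bad status code, and once the failure threshold is crossed the kubelet kills the container with its termination grace period (the gracePeriod=600 below) and restarts it in place. A minimal sketch of such a check in Go, with the endpoint taken from the log and the kubelet's default failureThreshold of 3 assumed, since the pod spec is not shown:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce mimics one HTTP liveness check: any transport error (e.g.
    // "connect: connection refused", as logged above) or a status outside
    // 200-399 counts as a failure.
    func probeOnce(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        const failureThreshold = 3 // kubelet default; the pod spec may override it
        failures := 0
        for i := 0; i < 10 && failures < failureThreshold; i++ {
            if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
                failures++
                fmt.Println("Probe failed:", err)
            } else {
                failures = 0 // consecutive failures are what count
            }
            time.Sleep(time.Second)
        }
        if failures >= failureThreshold {
            fmt.Println("threshold reached: kubelet would kill and restart the container")
        }
    }
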
Jan 30 00:20:09 crc kubenswrapper[5110]: I0130 00:20:09.212751 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" containerID="cri-o://c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3" gracePeriod=600 Jan 30 00:20:10 crc kubenswrapper[5110]: I0130 00:20:10.146514 5110 generic.go:358] "Generic (PLEG): container finished" podID="97dc714a-5d84-4c81-99ef-13067437fcad" containerID="c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3" exitCode=0 Jan 30 00:20:10 crc kubenswrapper[5110]: I0130 00:20:10.146587 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerDied","Data":"c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3"} Jan 30 00:20:10 crc kubenswrapper[5110]: I0130 00:20:10.147351 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f"} Jan 30 00:20:10 crc kubenswrapper[5110]: I0130 00:20:10.147386 5110 scope.go:117] "RemoveContainer" containerID="ab985dc6ebb821c594d5f79890013ae907f03697ba5299bb9059eba76bb5b13d" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.152435 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495542-fbwzv"] Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.154785 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.154814 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.155021 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" containerName="oc"
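
Before the next CronJob run's pod (auto-csr-approver-29495542) is admitted, the CPU and memory managers first drop the per-container assignments still recorded for the previous run's pod UID, as the RemoveStaleState entries above show. A small Go sketch of that cleanup pattern; the map layout and the liveness check here are illustrative assumptions, not the kubelet's actual data structures:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod no longer exists on the
    // node, mirroring the "RemoveStaleState: containerMap: removing
    // container" entries above.
    func removeStaleState(assignments map[key]string, livePods map[string]bool) {
        for k := range assignments {
            if !livePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", k.container, k.podUID)
                delete(assignments, k) // deleting while ranging is safe in Go
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"fcc630c8-ed9b-48ed-b521-ee1b36e22c0a", "oc"}: "cpuset:0-3",
        }
        // The completed pod is no longer live, so its state is removed.
        removeStaleState(assignments, map[string]bool{})
        fmt.Println("remaining assignments:", len(assignments))
    }
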
Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.164306 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.165116 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-fbwzv"] Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.167686 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.168547 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.168730 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.291757 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xfxg\" (UniqueName: \"kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg\") pod \"auto-csr-approver-29495542-fbwzv\" (UID: \"a638d71c-d5c9-4d33-95c9-a7d38717c4e9\") " pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.393555 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xfxg\" (UniqueName: \"kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg\") pod \"auto-csr-approver-29495542-fbwzv\" (UID: \"a638d71c-d5c9-4d33-95c9-a7d38717c4e9\") " pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.417669 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xfxg\" (UniqueName: \"kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg\") pod \"auto-csr-approver-29495542-fbwzv\" (UID: \"a638d71c-d5c9-4d33-95c9-a7d38717c4e9\") " pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.494976 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:00 crc kubenswrapper[5110]: I0130 00:22:00.848549 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-fbwzv"] Jan 30 00:22:01 crc kubenswrapper[5110]: I0130 00:22:01.133896 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" event={"ID":"a638d71c-d5c9-4d33-95c9-a7d38717c4e9","Type":"ContainerStarted","Data":"6608384f31f9235adec6c9379e331827469078f18bb6a9a2286a4f2ed4bbf1fa"} Jan 30 00:22:03 crc kubenswrapper[5110]: I0130 00:22:03.163771 5110 generic.go:358] "Generic (PLEG): container finished" podID="a638d71c-d5c9-4d33-95c9-a7d38717c4e9" containerID="b365cfe6de4833bcfa813a60d74349c92d0c8b57c7b0ced0b779276c97f2ae30" exitCode=0 Jan 30 00:22:03 crc kubenswrapper[5110]: I0130 00:22:03.163893 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" event={"ID":"a638d71c-d5c9-4d33-95c9-a7d38717c4e9","Type":"ContainerDied","Data":"b365cfe6de4833bcfa813a60d74349c92d0c8b57c7b0ced0b779276c97f2ae30"} Jan 30 00:22:04 crc kubenswrapper[5110]: I0130 00:22:04.484387 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:04 crc kubenswrapper[5110]: I0130 00:22:04.676717 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfxg\" (UniqueName: \"kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg\") pod \"a638d71c-d5c9-4d33-95c9-a7d38717c4e9\" (UID: \"a638d71c-d5c9-4d33-95c9-a7d38717c4e9\") " Jan 30 00:22:04 crc kubenswrapper[5110]: I0130 00:22:04.688426 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg" (OuterVolumeSpecName: "kube-api-access-9xfxg") pod "a638d71c-d5c9-4d33-95c9-a7d38717c4e9" (UID: "a638d71c-d5c9-4d33-95c9-a7d38717c4e9"). InnerVolumeSpecName "kube-api-access-9xfxg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:04 crc kubenswrapper[5110]: I0130 00:22:04.778669 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xfxg\" (UniqueName: \"kubernetes.io/projected/a638d71c-d5c9-4d33-95c9-a7d38717c4e9-kube-api-access-9xfxg\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:04 crc kubenswrapper[5110]: E0130 00:22:04.978855 5110 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda638d71c_d5c9_4d33_95c9_a7d38717c4e9.slice/crio-6608384f31f9235adec6c9379e331827469078f18bb6a9a2286a4f2ed4bbf1fa\": RecentStats: unable to find data in memory cache]" Jan 30 00:22:05 crc kubenswrapper[5110]: I0130 00:22:05.184880 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" Jan 30 00:22:05 crc kubenswrapper[5110]: I0130 00:22:05.184922 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-fbwzv" event={"ID":"a638d71c-d5c9-4d33-95c9-a7d38717c4e9","Type":"ContainerDied","Data":"6608384f31f9235adec6c9379e331827469078f18bb6a9a2286a4f2ed4bbf1fa"} Jan 30 00:22:05 crc kubenswrapper[5110]: I0130 00:22:05.184989 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6608384f31f9235adec6c9379e331827469078f18bb6a9a2286a4f2ed4bbf1fa" Jan 30 00:22:09 crc kubenswrapper[5110]: I0130 00:22:09.211006 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:22:09 crc kubenswrapper[5110]: I0130 00:22:09.211135 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:19 crc kubenswrapper[5110]: I0130 00:22:19.221323 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:22:19 crc kubenswrapper[5110]: I0130 00:22:19.222836 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:22:39 crc kubenswrapper[5110]: I0130 00:22:39.211186 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:22:39 crc kubenswrapper[5110]: I0130 00:22:39.212221 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:48 crc kubenswrapper[5110]: I0130 00:22:48.817519 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx"] Jan 30 00:22:48 crc kubenswrapper[5110]: I0130 00:22:48.819225 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="kube-rbac-proxy" containerID="cri-o://26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845" gracePeriod=30 Jan 30 00:22:48 crc kubenswrapper[5110]: I0130 00:22:48.819533 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="ovnkube-cluster-manager" containerID="cri-o://dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.030529 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.037453 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xdrfx"] Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038026 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-controller" containerID="cri-o://531d80f4432da2b8b09a05cf156a5afde04c2d29f2e77a15f3d8134940cb21b5" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038083 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="northd" containerID="cri-o://7460b800f35d430074709dfbb44364da98d88ad1209ee300d7bfd6c403e65a68" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038130 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="sbdb" containerID="cri-o://4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038168 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://344c3fae88cab9b3c695182ba5b3125c4bb651be76736410791242a9efc51abb" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038120 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-node" containerID="cri-o://0e7744345e7a304226006eddb988fdac7f93b2ffc2d953da5266ab7f9f8b2983" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038249 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-acl-logging" containerID="cri-o://14af1f5b9ba102050657728a106d998d5185fc102e772b9ddf9b7f98af2914c2" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.038260 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="nbdb" containerID="cri-o://a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.092452 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn"] Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093016 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="ovnkube-cluster-manager" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093034 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="ovnkube-cluster-manager" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093062 5110 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="kube-rbac-proxy" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093074 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="kube-rbac-proxy" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093089 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a638d71c-d5c9-4d33-95c9-a7d38717c4e9" containerName="oc" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093097 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a638d71c-d5c9-4d33-95c9-a7d38717c4e9" containerName="oc" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093185 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="kube-rbac-proxy" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093200 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerName="ovnkube-cluster-manager" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.093207 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a638d71c-d5c9-4d33-95c9-a7d38717c4e9" containerName="oc" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.099764 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.109847 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovnkube-controller" containerID="cri-o://a8fbf8c2a126adc08588bc73603a0c7c14c966eea5a4489d3a1a47e87251e041" gracePeriod=30 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.131480 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9v2\" (UniqueName: \"kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2\") pod \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.131524 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides\") pod \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.131635 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert\") pod \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.131659 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config\") pod \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\" (UID: \"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.132643 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides" 
(OuterVolumeSpecName: "env-overrides") pod "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" (UID: "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.133074 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" (UID: "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.145766 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" (UID: "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.160537 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2" (OuterVolumeSpecName: "kube-api-access-rc9v2") pod "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" (UID: "a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7"). InnerVolumeSpecName "kube-api-access-rc9v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233325 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233499 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233632 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233692 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c58jn\" (UniqueName: \"kubernetes.io/projected/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-kube-api-access-c58jn\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233745 5110 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-rc9v2\" (UniqueName: \"kubernetes.io/projected/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-kube-api-access-rc9v2\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233756 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233766 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.233777 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.334995 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.335066 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.335116 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.335170 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c58jn\" (UniqueName: \"kubernetes.io/projected/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-kube-api-access-c58jn\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.336017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.336191 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 
00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.340179 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.363090 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c58jn\" (UniqueName: \"kubernetes.io/projected/ab640ed8-ade0-48e3-9a26-36bfa96c86e3-kube-api-access-c58jn\") pod \"ovnkube-control-plane-97c9b6c48-l5svn\" (UID: \"ab640ed8-ade0-48e3-9a26-36bfa96c86e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.417070 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.440760 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.549368 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" event={"ID":"ab640ed8-ade0-48e3-9a26-36bfa96c86e3","Type":"ContainerStarted","Data":"4bcdf6db11398a796a722b345087a364102f95cc1cebd91c2f1050f6b9b5ca57"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.553047 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v6j88_f47cb22d-f09e-43a7-95e0-0e1008827f08/kube-multus/0.log" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.553114 5110 generic.go:358] "Generic (PLEG): container finished" podID="f47cb22d-f09e-43a7-95e0-0e1008827f08" containerID="f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a" exitCode=2 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.553234 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v6j88" event={"ID":"f47cb22d-f09e-43a7-95e0-0e1008827f08","Type":"ContainerDied","Data":"f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.554158 5110 scope.go:117] "RemoveContainer" containerID="f4d0ee5002b11f26e942411886115848b57c8d30457511c01de10d7e61e1240a" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.566996 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-acl-logging/0.log" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.567811 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-controller/0.log" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568638 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="a8fbf8c2a126adc08588bc73603a0c7c14c966eea5a4489d3a1a47e87251e041" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568686 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 
00:22:49.568711 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"a8fbf8c2a126adc08588bc73603a0c7c14c966eea5a4489d3a1a47e87251e041"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568809 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568830 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568745 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568869 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="7460b800f35d430074709dfbb44364da98d88ad1209ee300d7bfd6c403e65a68" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568893 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="344c3fae88cab9b3c695182ba5b3125c4bb651be76736410791242a9efc51abb" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568908 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="0e7744345e7a304226006eddb988fdac7f93b2ffc2d953da5266ab7f9f8b2983" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568918 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="14af1f5b9ba102050657728a106d998d5185fc102e772b9ddf9b7f98af2914c2" exitCode=143 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.568929 5110 generic.go:358] "Generic (PLEG): container finished" podID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerID="531d80f4432da2b8b09a05cf156a5afde04c2d29f2e77a15f3d8134940cb21b5" exitCode=143 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.569195 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"7460b800f35d430074709dfbb44364da98d88ad1209ee300d7bfd6c403e65a68"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.569223 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"344c3fae88cab9b3c695182ba5b3125c4bb651be76736410791242a9efc51abb"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.569245 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"0e7744345e7a304226006eddb988fdac7f93b2ffc2d953da5266ab7f9f8b2983"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.569264 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"14af1f5b9ba102050657728a106d998d5185fc102e772b9ddf9b7f98af2914c2"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.569282 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"531d80f4432da2b8b09a05cf156a5afde04c2d29f2e77a15f3d8134940cb21b5"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576376 5110 generic.go:358] "Generic (PLEG): container finished" podID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerID="dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576420 5110 generic.go:358] "Generic (PLEG): container finished" podID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" containerID="26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845" exitCode=0 Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576495 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerDied","Data":"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576534 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerDied","Data":"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576547 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" event={"ID":"a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7","Type":"ContainerDied","Data":"b5f6113d220beb4c2ed642925c6763d74296c34df30a5e7a11dbaee8ec6367a1"} Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576568 5110 scope.go:117] "RemoveContainer" containerID="dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.576764 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx" Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.592929 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404 is running failed: container process not found" containerID="4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.592937 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b is running failed: container process not found" containerID="a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.594064 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404 is running failed: container process not found" containerID="4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.594134 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b is running failed: container process not found" containerID="a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.594806 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404 is running failed: container process not found" containerID="4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.594912 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="sbdb" probeResult="unknown" Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.594956 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b is running failed: container process not found" containerID="a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.595046 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="nbdb" probeResult="unknown" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.617885 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx"] Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.618926 5110 scope.go:117] "RemoveContainer" containerID="26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.621717 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xfqbx"] Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.640523 5110 scope.go:117] "RemoveContainer" containerID="dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.641583 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\": container with ID starting with dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9 not found: ID does not exist" containerID="dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.641631 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9"} err="failed to get container status \"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\": rpc error: code = NotFound desc = could not find container \"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\": container with ID starting with dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9 not found: ID does not exist" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.641668 5110 scope.go:117] "RemoveContainer" containerID="26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845" Jan 30 00:22:49 crc kubenswrapper[5110]: E0130 00:22:49.643045 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\": container with ID starting with 26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845 not found: ID does not exist" containerID="26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845"
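
Note how RemoveContainer keeps going when the runtime reports NotFound: the containers were already removed along with their pod sandbox, so the error is logged at the DeleteContainer call sites above and below and then effectively ignored rather than retried. A short Go sketch of that idempotent-delete pattern against a gRPC-style runtime API; the remove callback is hypothetical, while status.Code and codes.NotFound are the real grpc-go helpers:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // deleteContainer wraps a CRI-style remove call: a NotFound error means
    // the container is already gone, so it is logged and swallowed instead
    // of being surfaced, which is why the entries here stay at Info level.
    func deleteContainer(id string, remove func(string) error) error {
        err := remove(id)
        if err != nil && status.Code(err) == codes.NotFound {
            fmt.Printf("DeleteContainer returned error for %s: %v (treated as already deleted)\n", id, err)
            return nil
        }
        return err
    }

    func main() {
        // Simulated runtime that no longer knows the container, as above.
        remove := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println("final err:", deleteContainer("dab2ee095fd8", remove))
    }
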
Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.643089 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845"} err="failed to get container status \"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\": rpc error: code = NotFound desc = could not find container \"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\": container with ID starting with 26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845 not found: ID does not exist" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.643120 5110 scope.go:117] "RemoveContainer" containerID="dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.643420 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9"} err="failed to get container status \"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\": rpc error: code = NotFound desc = could not find container \"dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9\": container with ID starting with dab2ee095fd83d0edbe7578b7827b5a43b6689d6e5a1d821e6c8d2787acc32d9 not found: ID does not exist" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.643465 5110 scope.go:117] "RemoveContainer" containerID="26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.643896 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845"} err="failed to get container status \"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\": rpc error: code = NotFound desc = could not find container \"26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845\": container with ID starting with 26957bb2e2f221a8e86625d8fda5978a0dba2d0cde57654bed8c841f587f8845 not found: ID does not exist" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.922463 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-acl-logging/0.log" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.925527 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-controller/0.log" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.926406 5110 util.go:48] "No ready sandbox for pod can be 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994386 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994446 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994540 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994587 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994646 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994689 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994730 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994758 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994785 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994810 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet\") 
pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994865 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sdgv\" (UniqueName: \"kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994899 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994930 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994961 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.994993 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.995051 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.995072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.995096 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.996929 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.997291 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.997421 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.997460 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.998286 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.998378 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.999009 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:49 crc kubenswrapper[5110]: I0130 00:22:49.999623 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999682 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash" (OuterVolumeSpecName: "host-slash") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999726 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999772 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999810 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket" (OuterVolumeSpecName: "log-socket") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999851 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:49.999889 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.001529 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.001592 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.003930 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9lf6n"] Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004685 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovnkube-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004717 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovnkube-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004742 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-acl-logging" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004751 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-acl-logging" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004765 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-node" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004774 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-node" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004788 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004797 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004806 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004814 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004835 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="northd" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004842 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="northd" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004861 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kubecfg-setup" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004868 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kubecfg-setup" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004878 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="sbdb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004886 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="sbdb" Jan 30 
00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004898 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="nbdb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.004905 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="nbdb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005011 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="northd" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005027 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovnkube-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005037 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="nbdb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005055 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-node" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005067 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="sbdb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005077 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005088 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-controller" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.005098 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" containerName="ovn-acl-logging" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.015888 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.017463 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv" (OuterVolumeSpecName: "kube-api-access-7sdgv") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "kube-api-access-7sdgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.024942 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097209 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097387 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log\") pod \"89a63cd7-c2e9-4666-a363-aa6f67187756\" (UID: \"89a63cd7-c2e9-4666-a363-aa6f67187756\") " Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097477 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-systemd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097516 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-config\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097547 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-script-lib\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097573 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-node-log\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097609 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-kubelet\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097633 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-slash\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097657 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-var-lib-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 
00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097681 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69fe1258-1a79-465a-b848-f205e924b6ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097712 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097751 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-log-socket\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097777 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-netns\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097798 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-netd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097824 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-env-overrides\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097860 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-etc-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097886 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-ovn\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097918 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-bin\") pod \"ovnkube-node-9lf6n\" (UID: 
\"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097949 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8hx\" (UniqueName: \"kubernetes.io/projected/69fe1258-1a79-465a-b848-f205e924b6ac-kube-api-access-xj8hx\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.097989 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-systemd-units\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098035 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098068 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098145 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098162 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89a63cd7-c2e9-4666-a363-aa6f67187756-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098174 5110 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098186 5110 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098198 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098210 5110 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098221 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/89a63cd7-c2e9-4666-a363-aa6f67187756-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098234 5110 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098245 5110 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098256 5110 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098267 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7sdgv\" (UniqueName: \"kubernetes.io/projected/89a63cd7-c2e9-4666-a363-aa6f67187756-kube-api-access-7sdgv\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098280 5110 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098291 5110 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098303 5110 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098314 5110 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098325 5110 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098360 5110 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098370 5110 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098432 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.098458 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log" (OuterVolumeSpecName: "node-log") pod "89a63cd7-c2e9-4666-a363-aa6f67187756" (UID: "89a63cd7-c2e9-4666-a363-aa6f67187756"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199604 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199675 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-log-socket\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199701 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-netns\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199723 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-netd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199745 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-env-overrides\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199777 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-etc-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199805 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-ovn\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199831 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-bin\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 
00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199856 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8hx\" (UniqueName: \"kubernetes.io/projected/69fe1258-1a79-465a-b848-f205e924b6ac-kube-api-access-xj8hx\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199879 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199906 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-etc-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199914 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-log-socket\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199966 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-run-netns\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.199991 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-ovn\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200027 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-netd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200028 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-cni-bin\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200051 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-systemd-units\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200086 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200114 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200149 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-systemd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200156 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-systemd-units\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200187 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-config\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200201 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200227 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-script-lib\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200237 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200261 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-node-log\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200272 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-run-systemd\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200309 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-node-log\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200310 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-kubelet\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200351 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-kubelet\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200380 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-slash\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200404 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-var-lib-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200430 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69fe1258-1a79-465a-b848-f205e924b6ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200551 5110 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200651 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-var-lib-openvswitch\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.200691 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69fe1258-1a79-465a-b848-f205e924b6ac-host-slash\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 
00:22:50.200736 5110 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89a63cd7-c2e9-4666-a363-aa6f67187756-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.201394 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-env-overrides\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.201720 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-config\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.202057 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69fe1258-1a79-465a-b848-f205e924b6ac-ovnkube-script-lib\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.208478 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69fe1258-1a79-465a-b848-f205e924b6ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.222788 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8hx\" (UniqueName: \"kubernetes.io/projected/69fe1258-1a79-465a-b848-f205e924b6ac-kube-api-access-xj8hx\") pod \"ovnkube-node-9lf6n\" (UID: \"69fe1258-1a79-465a-b848-f205e924b6ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.391906 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:50 crc kubenswrapper[5110]: W0130 00:22:50.417764 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fe1258_1a79_465a_b848_f205e924b6ac.slice/crio-cad91b5d83bfb6c5c1d8bc00aa3b64bba5400f87d21ba38a3ce5dc9e730fe4c0 WatchSource:0}: Error finding container cad91b5d83bfb6c5c1d8bc00aa3b64bba5400f87d21ba38a3ce5dc9e730fe4c0: Status 404 returned error can't find the container with id cad91b5d83bfb6c5c1d8bc00aa3b64bba5400f87d21ba38a3ce5dc9e730fe4c0 Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.599694 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-acl-logging/0.log" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.601100 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xdrfx_89a63cd7-c2e9-4666-a363-aa6f67187756/ovn-controller/0.log" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.601995 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" event={"ID":"89a63cd7-c2e9-4666-a363-aa6f67187756","Type":"ContainerDied","Data":"081281f4c7a2623bf2b29f821f09a92bc6c1ce88e6948cc8e2ed4b30e6e60fc9"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.602081 5110 scope.go:117] "RemoveContainer" containerID="a8fbf8c2a126adc08588bc73603a0c7c14c966eea5a4489d3a1a47e87251e041" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.602110 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrfx" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.610076 5110 generic.go:358] "Generic (PLEG): container finished" podID="69fe1258-1a79-465a-b848-f205e924b6ac" containerID="100e2cc15772ac4318d82680e3500a03bcb8f4cc7086f1a7a396249fc94bcf25" exitCode=0 Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.610149 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerDied","Data":"100e2cc15772ac4318d82680e3500a03bcb8f4cc7086f1a7a396249fc94bcf25"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.610210 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"cad91b5d83bfb6c5c1d8bc00aa3b64bba5400f87d21ba38a3ce5dc9e730fe4c0"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.615667 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" event={"ID":"ab640ed8-ade0-48e3-9a26-36bfa96c86e3","Type":"ContainerStarted","Data":"589dbb6456482a5b876405cf08c26be67395d7fc87eb8bc761c76186e99cf292"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.615706 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" event={"ID":"ab640ed8-ade0-48e3-9a26-36bfa96c86e3","Type":"ContainerStarted","Data":"ed63ce7c0ef3e7708778ef12319fe6c00a37fa447c2581123f6fc07f12a2b10c"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.619987 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v6j88_f47cb22d-f09e-43a7-95e0-0e1008827f08/kube-multus/0.log" Jan 
30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.620151 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v6j88" event={"ID":"f47cb22d-f09e-43a7-95e0-0e1008827f08","Type":"ContainerStarted","Data":"2493dbe0ec7f58de9c4769f5485b85a9c92ad45475f8e44b137b35caeacbe47c"} Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.637281 5110 scope.go:117] "RemoveContainer" containerID="4589781a97e6b160eaf7bcc04011b6d5c228815262271785dcae48fbdc992404" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.666771 5110 scope.go:117] "RemoveContainer" containerID="a33309c25b8233de12dbeefa1d96150d130b8c7876c9d1e9ccb5a09164b5f07b" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.706097 5110 scope.go:117] "RemoveContainer" containerID="7460b800f35d430074709dfbb44364da98d88ad1209ee300d7bfd6c403e65a68" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.715902 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xdrfx"] Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.725764 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xdrfx"] Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.725855 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-l5svn" podStartSLOduration=2.725675312 podStartE2EDuration="2.725675312s" podCreationTimestamp="2026-01-30 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:22:50.718102698 +0000 UTC m=+632.676338837" watchObservedRunningTime="2026-01-30 00:22:50.725675312 +0000 UTC m=+632.683911441" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.743249 5110 scope.go:117] "RemoveContainer" containerID="344c3fae88cab9b3c695182ba5b3125c4bb651be76736410791242a9efc51abb" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.759629 5110 scope.go:117] "RemoveContainer" containerID="0e7744345e7a304226006eddb988fdac7f93b2ffc2d953da5266ab7f9f8b2983" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.774685 5110 scope.go:117] "RemoveContainer" containerID="14af1f5b9ba102050657728a106d998d5185fc102e772b9ddf9b7f98af2914c2" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.794110 5110 scope.go:117] "RemoveContainer" containerID="531d80f4432da2b8b09a05cf156a5afde04c2d29f2e77a15f3d8134940cb21b5" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.811222 5110 scope.go:117] "RemoveContainer" containerID="4664739c9ad7cce574291016b2470d3b429fb9e15ef8a9a0cdb2cdad75c352c1" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.883482 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a63cd7-c2e9-4666-a363-aa6f67187756" path="/var/lib/kubelet/pods/89a63cd7-c2e9-4666-a363-aa6f67187756/volumes" Jan 30 00:22:50 crc kubenswrapper[5110]: I0130 00:22:50.885481 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7" path="/var/lib/kubelet/pods/a3d2a9cb-9e52-4d43-8fe0-a05c49dfc8c7/volumes" Jan 30 00:22:51 crc kubenswrapper[5110]: I0130 00:22:51.636490 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"ccb35070a35ac6d174be968569c8b2488464552fd40e2ff940916259a795c742"} Jan 30 00:22:51 crc kubenswrapper[5110]: I0130 00:22:51.636564 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"00ab32923ed6f477cc46c7f9235bdf7e91a85d60614264bca6f8f704f559e4f8"} Jan 30 00:22:51 crc kubenswrapper[5110]: I0130 00:22:51.636588 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"bbe06cb2ca7e00427baab5b663bfcd55dcc41260a47be476a58120ba15f812fc"} Jan 30 00:22:51 crc kubenswrapper[5110]: I0130 00:22:51.636608 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"a9b95e5ab4c9ad45bb6e88b2813c4326536057ab64f10f167b7bfd1f82d1f02c"} Jan 30 00:22:51 crc kubenswrapper[5110]: I0130 00:22:51.636628 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"b8e68b1d2c56431793595d5cbdfc09b26d4027d6dd8b5527982b5057289f1da4"} Jan 30 00:22:52 crc kubenswrapper[5110]: I0130 00:22:52.653094 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"e038868a74e58a4cb8d569d5b7aa76731bd5c83951393963171e1cdf7d5aeab7"} Jan 30 00:22:54 crc kubenswrapper[5110]: I0130 00:22:54.684472 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"251d79b42bc0d73d6e58606b009983af0a3c3643b34bc375a2ca90124969ef40"} Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.706813 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" event={"ID":"69fe1258-1a79-465a-b848-f205e924b6ac","Type":"ContainerStarted","Data":"f7a9affa14d61e45972c4b6cbedbc6a1161586f823fa743686c6080a5704b3c6"} Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.708265 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.708301 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.708409 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.763705 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 00:22:56.764760 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" podStartSLOduration=7.764741615 podStartE2EDuration="7.764741615s" podCreationTimestamp="2026-01-30 00:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:22:56.756531094 +0000 UTC m=+638.714767233" watchObservedRunningTime="2026-01-30 00:22:56.764741615 +0000 UTC m=+638.722977754" Jan 30 00:22:56 crc kubenswrapper[5110]: I0130 
00:22:56.765719 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.210663 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.211514 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.211584 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.212655 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f"} pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.212767 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" containerID="cri-o://373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f" gracePeriod=600 Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.824309 5110 generic.go:358] "Generic (PLEG): container finished" podID="97dc714a-5d84-4c81-99ef-13067437fcad" containerID="373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f" exitCode=0 Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.824400 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerDied","Data":"373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f"} Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.825477 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e"} Jan 30 00:23:09 crc kubenswrapper[5110]: I0130 00:23:09.825561 5110 scope.go:117] "RemoveContainer" containerID="c8425c40f95abba773bd525b2856a2ac875d752821b0130fdb9355c7edb391d3" Jan 30 00:23:28 crc kubenswrapper[5110]: I0130 00:23:28.756474 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9lf6n" Jan 30 00:23:51 crc kubenswrapper[5110]: I0130 00:23:51.486757 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"] Jan 30 00:23:51 crc kubenswrapper[5110]: I0130 00:23:51.488253 5110 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-42v84" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="registry-server" containerID="cri-o://0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169" gracePeriod=30 Jan 30 00:23:51 crc kubenswrapper[5110]: I0130 00:23:51.916315 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.059923 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4nmf\" (UniqueName: \"kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf\") pod \"e47b09ee-8474-4439-8902-26b107135f5f\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.060034 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content\") pod \"e47b09ee-8474-4439-8902-26b107135f5f\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.060142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities\") pod \"e47b09ee-8474-4439-8902-26b107135f5f\" (UID: \"e47b09ee-8474-4439-8902-26b107135f5f\") " Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.062306 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities" (OuterVolumeSpecName: "utilities") pod "e47b09ee-8474-4439-8902-26b107135f5f" (UID: "e47b09ee-8474-4439-8902-26b107135f5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.068060 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf" (OuterVolumeSpecName: "kube-api-access-n4nmf") pod "e47b09ee-8474-4439-8902-26b107135f5f" (UID: "e47b09ee-8474-4439-8902-26b107135f5f"). InnerVolumeSpecName "kube-api-access-n4nmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.088467 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e47b09ee-8474-4439-8902-26b107135f5f" (UID: "e47b09ee-8474-4439-8902-26b107135f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.161901 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4nmf\" (UniqueName: \"kubernetes.io/projected/e47b09ee-8474-4439-8902-26b107135f5f-kube-api-access-n4nmf\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.161948 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.161961 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47b09ee-8474-4439-8902-26b107135f5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.191101 5110 generic.go:358] "Generic (PLEG): container finished" podID="e47b09ee-8474-4439-8902-26b107135f5f" containerID="0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169" exitCode=0 Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.191150 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerDied","Data":"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169"} Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.191185 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42v84" event={"ID":"e47b09ee-8474-4439-8902-26b107135f5f","Type":"ContainerDied","Data":"fcf14e55a5927db210190a170ebca7f764deebb8e88b4a86c6e93635d321f19e"} Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.191207 5110 scope.go:117] "RemoveContainer" containerID="0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.191262 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42v84" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.215962 5110 scope.go:117] "RemoveContainer" containerID="b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.239985 5110 scope.go:117] "RemoveContainer" containerID="6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.241978 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"] Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.246306 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-42v84"] Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.258805 5110 scope.go:117] "RemoveContainer" containerID="0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169" Jan 30 00:23:52 crc kubenswrapper[5110]: E0130 00:23:52.259363 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169\": container with ID starting with 0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169 not found: ID does not exist" containerID="0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.259421 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169"} err="failed to get container status \"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169\": rpc error: code = NotFound desc = could not find container \"0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169\": container with ID starting with 0090c9cefbd590b5ca931ffece2321d1ed8a1c6a1e4e74c9d236497ec7dec169 not found: ID does not exist" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.259462 5110 scope.go:117] "RemoveContainer" containerID="b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9" Jan 30 00:23:52 crc kubenswrapper[5110]: E0130 00:23:52.259873 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9\": container with ID starting with b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9 not found: ID does not exist" containerID="b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.259922 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9"} err="failed to get container status \"b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9\": rpc error: code = NotFound desc = could not find container \"b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9\": container with ID starting with b42594411ef6b3924d65867cd44db5e31e880564e84b1191f2d5bee4cb1b63d9 not found: ID does not exist" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.259954 5110 scope.go:117] "RemoveContainer" containerID="6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922" Jan 30 00:23:52 crc kubenswrapper[5110]: E0130 00:23:52.260180 5110 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922\": container with ID starting with 6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922 not found: ID does not exist" containerID="6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.260216 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922"} err="failed to get container status \"6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922\": rpc error: code = NotFound desc = could not find container \"6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922\": container with ID starting with 6905aa076471e8340fc487e07c9d70cf1dea20620aead45e62c5f18d283f5922 not found: ID does not exist" Jan 30 00:23:52 crc kubenswrapper[5110]: I0130 00:23:52.884818 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47b09ee-8474-4439-8902-26b107135f5f" path="/var/lib/kubelet/pods/e47b09ee-8474-4439-8902-26b107135f5f/volumes" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.402615 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt"] Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404663 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="extract-utilities" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404707 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="extract-utilities" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404757 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="registry-server" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404770 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="registry-server" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404790 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="extract-content" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404803 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="extract-content" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.404969 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e47b09ee-8474-4439-8902-26b107135f5f" containerName="registry-server" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.420243 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt"] Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.420605 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.424531 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.520657 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.520729 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.520972 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27chj\" (UniqueName: \"kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.622608 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.622691 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.622755 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27chj\" (UniqueName: \"kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.623825 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.624206 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.648150 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27chj\" (UniqueName: \"kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:55 crc kubenswrapper[5110]: I0130 00:23:55.752237 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:23:56 crc kubenswrapper[5110]: I0130 00:23:56.252977 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt"] Jan 30 00:23:57 crc kubenswrapper[5110]: I0130 00:23:57.234402 5110 generic.go:358] "Generic (PLEG): container finished" podID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerID="0c09ff1e02c0affd889bdc38d000717109016563446cf72df11c0a7f90e2dc53" exitCode=0 Jan 30 00:23:57 crc kubenswrapper[5110]: I0130 00:23:57.234556 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" event={"ID":"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a","Type":"ContainerDied","Data":"0c09ff1e02c0affd889bdc38d000717109016563446cf72df11c0a7f90e2dc53"} Jan 30 00:23:57 crc kubenswrapper[5110]: I0130 00:23:57.235246 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" event={"ID":"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a","Type":"ContainerStarted","Data":"87123c7081c603063e9f3e7609bbc4ab1f0e20a73945a57b2d395a13e44df5a2"} Jan 30 00:23:59 crc kubenswrapper[5110]: I0130 00:23:59.254362 5110 generic.go:358] "Generic (PLEG): container finished" podID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerID="4db4f06e471bec6918d818631281cc50bda31ea3e4baaa764365350730b8c410" exitCode=0 Jan 30 00:23:59 crc kubenswrapper[5110]: I0130 00:23:59.254421 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" event={"ID":"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a","Type":"ContainerDied","Data":"4db4f06e471bec6918d818631281cc50bda31ea3e4baaa764365350730b8c410"} Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.148527 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495544-hvwrh"] Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.153021 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.155177 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.155815 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.156089 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.165326 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-hvwrh"] Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.201448 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8tn\" (UniqueName: \"kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn\") pod \"auto-csr-approver-29495544-hvwrh\" (UID: \"931d619a-309f-4ffe-bc9f-93097cdf6afe\") " pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.267110 5110 generic.go:358] "Generic (PLEG): container finished" podID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerID="2338bdcdb0edb546f02e9e24a2670ea053a6df3c5d57b121059ed012e90e282d" exitCode=0 Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.267188 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" event={"ID":"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a","Type":"ContainerDied","Data":"2338bdcdb0edb546f02e9e24a2670ea053a6df3c5d57b121059ed012e90e282d"} Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.303250 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8tn\" (UniqueName: \"kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn\") pod \"auto-csr-approver-29495544-hvwrh\" (UID: \"931d619a-309f-4ffe-bc9f-93097cdf6afe\") " pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.343862 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8tn\" (UniqueName: \"kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn\") pod \"auto-csr-approver-29495544-hvwrh\" (UID: \"931d619a-309f-4ffe-bc9f-93097cdf6afe\") " pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.480788 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:00 crc kubenswrapper[5110]: I0130 00:24:00.758941 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-hvwrh"] Jan 30 00:24:00 crc kubenswrapper[5110]: W0130 00:24:00.763145 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod931d619a_309f_4ffe_bc9f_93097cdf6afe.slice/crio-1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782 WatchSource:0}: Error finding container 1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782: Status 404 returned error can't find the container with id 1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782 Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.278791 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" event={"ID":"931d619a-309f-4ffe-bc9f-93097cdf6afe","Type":"ContainerStarted","Data":"1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782"} Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.611062 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.727160 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util\") pod \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.727229 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27chj\" (UniqueName: \"kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj\") pod \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.727448 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle\") pod \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\" (UID: \"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a\") " Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.731659 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle" (OuterVolumeSpecName: "bundle") pod "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" (UID: "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.739557 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj" (OuterVolumeSpecName: "kube-api-access-27chj") pod "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" (UID: "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a"). InnerVolumeSpecName "kube-api-access-27chj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.829562 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:01 crc kubenswrapper[5110]: I0130 00:24:01.829616 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27chj\" (UniqueName: \"kubernetes.io/projected/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-kube-api-access-27chj\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.049615 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util" (OuterVolumeSpecName: "util") pod "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" (UID: "c7df63d4-15bd-4b81-b3bf-cf9fe51d275a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.138918 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7df63d4-15bd-4b81-b3bf-cf9fe51d275a-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.289830 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" event={"ID":"931d619a-309f-4ffe-bc9f-93097cdf6afe","Type":"ContainerStarted","Data":"818ede0477bdf0de808b61b61fe20baecb4d1e551b6b252700eeeea6e1fec40b"} Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.293619 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" event={"ID":"c7df63d4-15bd-4b81-b3bf-cf9fe51d275a","Type":"ContainerDied","Data":"87123c7081c603063e9f3e7609bbc4ab1f0e20a73945a57b2d395a13e44df5a2"} Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.293663 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87123c7081c603063e9f3e7609bbc4ab1f0e20a73945a57b2d395a13e44df5a2" Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.293704 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt" Jan 30 00:24:02 crc kubenswrapper[5110]: I0130 00:24:02.317183 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" podStartSLOduration=1.519165951 podStartE2EDuration="2.317156019s" podCreationTimestamp="2026-01-30 00:24:00 +0000 UTC" firstStartedPulling="2026-01-30 00:24:00.764763771 +0000 UTC m=+702.722999900" lastFinishedPulling="2026-01-30 00:24:01.562753809 +0000 UTC m=+703.520989968" observedRunningTime="2026-01-30 00:24:02.312697634 +0000 UTC m=+704.270933803" watchObservedRunningTime="2026-01-30 00:24:02.317156019 +0000 UTC m=+704.275392188" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.304512 5110 generic.go:358] "Generic (PLEG): container finished" podID="931d619a-309f-4ffe-bc9f-93097cdf6afe" containerID="818ede0477bdf0de808b61b61fe20baecb4d1e551b6b252700eeeea6e1fec40b" exitCode=0 Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.304808 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" event={"ID":"931d619a-309f-4ffe-bc9f-93097cdf6afe","Type":"ContainerDied","Data":"818ede0477bdf0de808b61b61fe20baecb4d1e551b6b252700eeeea6e1fec40b"} Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.404647 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr"] Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405759 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="util" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405791 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="util" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405813 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="pull" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405826 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="pull" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405913 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="extract" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.405927 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="extract" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.406122 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7df63d4-15bd-4b81-b3bf-cf9fe51d275a" containerName="extract" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.414225 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.419098 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.422410 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr"] Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.568765 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb4fr\" (UniqueName: \"kubernetes.io/projected/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-kube-api-access-sb4fr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.568972 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.569624 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.671496 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.671597 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.671676 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sb4fr\" (UniqueName: \"kubernetes.io/projected/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-kube-api-access-sb4fr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.672590 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.673054 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.708400 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb4fr\" (UniqueName: \"kubernetes.io/projected/320c163c-8d94-4ca5-a17d-b0f3dcc0aa73-kube-api-access-sb4fr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr\" (UID: \"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:03 crc kubenswrapper[5110]: I0130 00:24:03.778645 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.075779 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr"] Jan 30 00:24:04 crc kubenswrapper[5110]: W0130 00:24:04.092205 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320c163c_8d94_4ca5_a17d_b0f3dcc0aa73.slice/crio-17338750867f5dca1aec1b830ccf31fcfc273e335dea8b5e3db706bdee334232 WatchSource:0}: Error finding container 17338750867f5dca1aec1b830ccf31fcfc273e335dea8b5e3db706bdee334232: Status 404 returned error can't find the container with id 17338750867f5dca1aec1b830ccf31fcfc273e335dea8b5e3db706bdee334232 Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.182546 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns"] Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.202487 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns"] Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.202741 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.282030 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z88w\" (UniqueName: \"kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.282113 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.282167 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.315501 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" event={"ID":"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73","Type":"ContainerStarted","Data":"c60ec31c729a1bc69f6887be24fe21c034df7965b8d480a8da4f19f33b8ecd95"} Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.315831 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" event={"ID":"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73","Type":"ContainerStarted","Data":"17338750867f5dca1aec1b830ccf31fcfc273e335dea8b5e3db706bdee334232"} Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.386511 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.386631 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.386808 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2z88w\" (UniqueName: \"kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: 
\"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.387184 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.387321 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.435603 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z88w\" (UniqueName: \"kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.525376 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.543828 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.691052 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh8tn\" (UniqueName: \"kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn\") pod \"931d619a-309f-4ffe-bc9f-93097cdf6afe\" (UID: \"931d619a-309f-4ffe-bc9f-93097cdf6afe\") " Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.698608 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn" (OuterVolumeSpecName: "kube-api-access-zh8tn") pod "931d619a-309f-4ffe-bc9f-93097cdf6afe" (UID: "931d619a-309f-4ffe-bc9f-93097cdf6afe"). InnerVolumeSpecName "kube-api-access-zh8tn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.793324 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zh8tn\" (UniqueName: \"kubernetes.io/projected/931d619a-309f-4ffe-bc9f-93097cdf6afe-kube-api-access-zh8tn\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:04 crc kubenswrapper[5110]: I0130 00:24:04.998477 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns"] Jan 30 00:24:05 crc kubenswrapper[5110]: W0130 00:24:05.006929 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf6d0ef_21b9_4c57_a8b5_67230aa296d2.slice/crio-566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460 WatchSource:0}: Error finding container 566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460: Status 404 returned error can't find the container with id 566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460 Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.346736 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" event={"ID":"931d619a-309f-4ffe-bc9f-93097cdf6afe","Type":"ContainerDied","Data":"1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782"} Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.347298 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1141c058a2c4dc4a824267c9843b9df87129381137972c4156a0ee81a126d782" Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.347206 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-hvwrh" Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.351392 5110 generic.go:358] "Generic (PLEG): container finished" podID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerID="a83b1698939b16c5ad7b6d97079b7f3a337fde9705c26fa60f7e48af5bfd0617" exitCode=0 Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.352115 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" event={"ID":"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2","Type":"ContainerDied","Data":"a83b1698939b16c5ad7b6d97079b7f3a337fde9705c26fa60f7e48af5bfd0617"} Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.352190 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" event={"ID":"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2","Type":"ContainerStarted","Data":"566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460"} Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.355001 5110 generic.go:358] "Generic (PLEG): container finished" podID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" containerID="c60ec31c729a1bc69f6887be24fe21c034df7965b8d480a8da4f19f33b8ecd95" exitCode=0 Jan 30 00:24:05 crc kubenswrapper[5110]: I0130 00:24:05.355195 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" event={"ID":"320c163c-8d94-4ca5-a17d-b0f3dcc0aa73","Type":"ContainerDied","Data":"c60ec31c729a1bc69f6887be24fe21c034df7965b8d480a8da4f19f33b8ecd95"} Jan 30 00:24:05 crc kubenswrapper[5110]: E0130 00:24:05.405167 5110 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:05 crc kubenswrapper[5110]: E0130 00:24:05.405523 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:24:05 crc kubenswrapper[5110]: E0130 00:24:05.407582 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:24:06 crc kubenswrapper[5110]: E0130 00:24:06.364756 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:24:07 crc kubenswrapper[5110]: I0130 00:24:07.371365 5110 generic.go:358] "Generic (PLEG): container finished" podID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerID="76ca3fa6e60385d17178d7a3b9e1f3994f31c9a9a19aa2226afef5685e78c074" exitCode=0 Jan 30 00:24:07 crc kubenswrapper[5110]: I0130 00:24:07.371493 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" event={"ID":"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2","Type":"ContainerDied","Data":"76ca3fa6e60385d17178d7a3b9e1f3994f31c9a9a19aa2226afef5685e78c074"} Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.021008 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v"] Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.022462 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="931d619a-309f-4ffe-bc9f-93097cdf6afe" containerName="oc" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.022489 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="931d619a-309f-4ffe-bc9f-93097cdf6afe" containerName="oc" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.022611 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="931d619a-309f-4ffe-bc9f-93097cdf6afe" containerName="oc" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.027498 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.061659 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v"] Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.150795 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jdj\" (UniqueName: \"kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.150908 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.150984 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.252058 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.252132 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jdj\" (UniqueName: \"kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.252170 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.252694 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.252720 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.292027 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jdj\" (UniqueName: \"kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.340451 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.399479 5110 generic.go:358] "Generic (PLEG): container finished" podID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerID="a5362843908c8e30e18b8ff1143ee9ac2e5f8f5440c75b4446f8d80aaf53747f" exitCode=0 Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.399575 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" event={"ID":"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2","Type":"ContainerDied","Data":"a5362843908c8e30e18b8ff1143ee9ac2e5f8f5440c75b4446f8d80aaf53747f"} Jan 30 00:24:08 crc kubenswrapper[5110]: I0130 00:24:08.657663 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v"] Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.409359 5110 generic.go:358] "Generic (PLEG): container finished" podID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerID="9930750d74021264d8fc07d64cd373adc1d2ca54c78c582173c5d145486211c5" exitCode=0 Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.410911 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" event={"ID":"4b38b889-3a80-4a92-ac57-00460c3dc1e6","Type":"ContainerDied","Data":"9930750d74021264d8fc07d64cd373adc1d2ca54c78c582173c5d145486211c5"} Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.410992 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" event={"ID":"4b38b889-3a80-4a92-ac57-00460c3dc1e6","Type":"ContainerStarted","Data":"9526266df6196be57722d02317ea545c78fe7f3828e6347db0f968fdeedde081"} Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.766823 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.792675 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z88w\" (UniqueName: \"kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w\") pod \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.792750 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle\") pod \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.792799 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util\") pod \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\" (UID: \"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2\") " Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.793572 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle" (OuterVolumeSpecName: "bundle") pod "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" (UID: "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.807784 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w" (OuterVolumeSpecName: "kube-api-access-2z88w") pod "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" (UID: "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2"). InnerVolumeSpecName "kube-api-access-2z88w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.817544 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util" (OuterVolumeSpecName: "util") pod "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" (UID: "fcf6d0ef-21b9-4c57-a8b5-67230aa296d2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.894297 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2z88w\" (UniqueName: \"kubernetes.io/projected/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-kube-api-access-2z88w\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.894367 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:09 crc kubenswrapper[5110]: I0130 00:24:09.894378 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcf6d0ef-21b9-4c57-a8b5-67230aa296d2-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:10 crc kubenswrapper[5110]: I0130 00:24:10.418008 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" event={"ID":"fcf6d0ef-21b9-4c57-a8b5-67230aa296d2","Type":"ContainerDied","Data":"566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460"} Jan 30 00:24:10 crc kubenswrapper[5110]: I0130 00:24:10.418071 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="566b8887257e6e4c255bb25496b7e11b8274526bed95faf751b969855d9f1460" Jan 30 00:24:10 crc kubenswrapper[5110]: I0130 00:24:10.418186 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns" Jan 30 00:24:14 crc kubenswrapper[5110]: I0130 00:24:14.442941 5110 generic.go:358] "Generic (PLEG): container finished" podID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerID="88d917091e6309389a5f80da5a0335d770f3c3de199f37d8f83a5a22260b268c" exitCode=0 Jan 30 00:24:14 crc kubenswrapper[5110]: I0130 00:24:14.442998 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" event={"ID":"4b38b889-3a80-4a92-ac57-00460c3dc1e6","Type":"ContainerDied","Data":"88d917091e6309389a5f80da5a0335d770f3c3de199f37d8f83a5a22260b268c"} Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.252766 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253757 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="pull" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253775 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="pull" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253788 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="util" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253795 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="util" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253811 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="extract" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253818 5110 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="extract" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.253931 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcf6d0ef-21b9-4c57-a8b5-67230aa296d2" containerName="extract" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.258733 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.260284 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-lsgxc\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.261968 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.262465 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.262651 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.293645 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.303181 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.306757 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-vvdtj\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.307133 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.310689 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.324488 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.324670 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.331371 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.364262 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.364375 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.364417 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.364444 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.364519 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mn68\" (UniqueName: \"kubernetes.io/projected/c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995-kube-api-access-9mn68\") pod \"obo-prometheus-operator-9bc85b4bf-j6qvs\" (UID: \"c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.453417 5110 generic.go:358] "Generic (PLEG): container finished" podID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerID="5b776ab079618248f0d9bd4eb671a8d34d5cdd1aae4e1d983e53c3a12ecfe289" exitCode=0 Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.453650 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" event={"ID":"4b38b889-3a80-4a92-ac57-00460c3dc1e6","Type":"ContainerDied","Data":"5b776ab079618248f0d9bd4eb671a8d34d5cdd1aae4e1d983e53c3a12ecfe289"} Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.466115 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9mn68\" (UniqueName: \"kubernetes.io/projected/c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995-kube-api-access-9mn68\") pod 
\"obo-prometheus-operator-9bc85b4bf-j6qvs\" (UID: \"c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.466187 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.466224 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.466252 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.466272 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.476381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.477145 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.482252 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee77ccd5-299e-47f9-ba9b-26e406040a34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx\" (UID: \"ee77ccd5-299e-47f9-ba9b-26e406040a34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.482502 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40f000fa-a4e9-4f45-a846-707d5b5b1643-apiservice-cert\") 
pod \"obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6\" (UID: \"40f000fa-a4e9-4f45-a846-707d5b5b1643\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.498084 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mn68\" (UniqueName: \"kubernetes.io/projected/c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995-kube-api-access-9mn68\") pod \"obo-prometheus-operator-9bc85b4bf-j6qvs\" (UID: \"c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.505356 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4zx86"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.513887 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.515999 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-s8pls\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.519023 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.520527 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4zx86"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.567965 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmhz\" (UniqueName: \"kubernetes.io/projected/897cae54-4f71-44e7-a9ee-1ef4558e0432-kube-api-access-jkmhz\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.568292 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/897cae54-4f71-44e7-a9ee-1ef4558e0432-observability-operator-tls\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.576443 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.618580 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.651197 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.669643 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkmhz\" (UniqueName: \"kubernetes.io/projected/897cae54-4f71-44e7-a9ee-1ef4558e0432-kube-api-access-jkmhz\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.669726 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/897cae54-4f71-44e7-a9ee-1ef4558e0432-observability-operator-tls\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.674789 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-88w5t"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.683302 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/897cae54-4f71-44e7-a9ee-1ef4558e0432-observability-operator-tls\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.694510 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.700323 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-szrnq\"" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.726526 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-88w5t"] Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.769678 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkmhz\" (UniqueName: \"kubernetes.io/projected/897cae54-4f71-44e7-a9ee-1ef4558e0432-kube-api-access-jkmhz\") pod \"observability-operator-85c68dddb-4zx86\" (UID: \"897cae54-4f71-44e7-a9ee-1ef4558e0432\") " pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.771239 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvghx\" (UniqueName: \"kubernetes.io/projected/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-kube-api-access-mvghx\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.771311 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.835254 5110 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.872788 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mvghx\" (UniqueName: \"kubernetes.io/projected/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-kube-api-access-mvghx\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.872853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.874095 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:15 crc kubenswrapper[5110]: I0130 00:24:15.921988 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvghx\" (UniqueName: \"kubernetes.io/projected/8653dcb1-b9d6-4b22-b7ba-0c91d408836a-kube-api-access-mvghx\") pod \"perses-operator-669c9f96b5-88w5t\" (UID: \"8653dcb1-b9d6-4b22-b7ba-0c91d408836a\") " pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.020553 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.105026 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx"] Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.170306 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs"] Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.185907 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6"] Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.461944 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" event={"ID":"40f000fa-a4e9-4f45-a846-707d5b5b1643","Type":"ContainerStarted","Data":"a820aabd1e390a11333005a7dc77fe08c4a6408cdd027068f3f66554f4d8b2a2"} Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.464016 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" event={"ID":"ee77ccd5-299e-47f9-ba9b-26e406040a34","Type":"ContainerStarted","Data":"0635c51e75bc85132442cba85db52c1d32940bf1659b601e35f98901e15e1e5c"} Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.465430 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" event={"ID":"c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995","Type":"ContainerStarted","Data":"f7fb5790606a6db6875f17930acdd89ec61490a9d3b5f5aa0ad0001684f5c1e9"} Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.503609 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4zx86"] Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.552601 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-88w5t"] Jan 30 00:24:16 crc kubenswrapper[5110]: W0130 00:24:16.562082 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8653dcb1_b9d6_4b22_b7ba_0c91d408836a.slice/crio-31796e26be7a91e2356992018e8759e8649eb0a633ab6c81b7a61c1caa244fe9 WatchSource:0}: Error finding container 31796e26be7a91e2356992018e8759e8649eb0a633ab6c81b7a61c1caa244fe9: Status 404 returned error can't find the container with id 31796e26be7a91e2356992018e8759e8649eb0a633ab6c81b7a61c1caa244fe9 Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.743497 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.788847 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util\") pod \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.789438 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle\") pod \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.789520 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jdj\" (UniqueName: \"kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj\") pod \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\" (UID: \"4b38b889-3a80-4a92-ac57-00460c3dc1e6\") " Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.790549 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle" (OuterVolumeSpecName: "bundle") pod "4b38b889-3a80-4a92-ac57-00460c3dc1e6" (UID: "4b38b889-3a80-4a92-ac57-00460c3dc1e6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.799144 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util" (OuterVolumeSpecName: "util") pod "4b38b889-3a80-4a92-ac57-00460c3dc1e6" (UID: "4b38b889-3a80-4a92-ac57-00460c3dc1e6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.809924 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj" (OuterVolumeSpecName: "kube-api-access-v2jdj") pod "4b38b889-3a80-4a92-ac57-00460c3dc1e6" (UID: "4b38b889-3a80-4a92-ac57-00460c3dc1e6"). InnerVolumeSpecName "kube-api-access-v2jdj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.891134 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2jdj\" (UniqueName: \"kubernetes.io/projected/4b38b889-3a80-4a92-ac57-00460c3dc1e6-kube-api-access-v2jdj\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.904533 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:16 crc kubenswrapper[5110]: I0130 00:24:16.904578 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b38b889-3a80-4a92-ac57-00460c3dc1e6-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:17 crc kubenswrapper[5110]: E0130 00:24:17.122922 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:17 crc kubenswrapper[5110]: E0130 00:24:17.123266 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or 
OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:24:17 crc kubenswrapper[5110]: E0130 00:24:17.124445 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:24:17 crc kubenswrapper[5110]: I0130 00:24:17.477121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" event={"ID":"8653dcb1-b9d6-4b22-b7ba-0c91d408836a","Type":"ContainerStarted","Data":"31796e26be7a91e2356992018e8759e8649eb0a633ab6c81b7a61c1caa244fe9"} Jan 30 00:24:17 crc kubenswrapper[5110]: I0130 00:24:17.480197 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-4zx86" event={"ID":"897cae54-4f71-44e7-a9ee-1ef4558e0432","Type":"ContainerStarted","Data":"c5d1ed70c9351e19e6791767235a5962d24e4fe6c74fd9b135598ec5585bcc98"} Jan 30 00:24:17 crc kubenswrapper[5110]: I0130 00:24:17.498748 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" event={"ID":"4b38b889-3a80-4a92-ac57-00460c3dc1e6","Type":"ContainerDied","Data":"9526266df6196be57722d02317ea545c78fe7f3828e6347db0f968fdeedde081"} Jan 30 00:24:17 crc kubenswrapper[5110]: I0130 00:24:17.498802 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9526266df6196be57722d02317ea545c78fe7f3828e6347db0f968fdeedde081" Jan 30 00:24:17 crc kubenswrapper[5110]: I0130 00:24:17.498952 5110 util.go:48] "No ready sandbox for pod can be found. 
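The three pull-failure entries above share one root cause: the node's DNS resolver (199.204.47.54:53) returns a failure ("server misbehaving", i.e. a SERVFAIL-class error) for registry.connect.redhat.com, so both the image pull path and the OCI-artifact fallback fail at the initial registry ping. A small Go check that reproduces the two failing steps, as a hypothetical diagnostic to run from the affected node; it is not part of the log:

```go
// Hypothetical diagnostic reproducing the two steps that fail in the log:
// resolve the registry host, then "ping" its /v2/ endpoint the way an
// image pull does before fetching any manifest.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	host := "registry.connect.redhat.com"

	// Step 1: DNS resolution. "server misbehaving" in the log is the Go
	// resolver's wording for a failing upstream DNS server.
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Println("lookup failed (matches the log):", err)
		return
	}
	fmt.Println("resolved:", addrs)

	// Step 2: the registry ping against /v2/ that the pull performs.
	c := &http.Client{Timeout: 10 * time.Second}
	resp, err := c.Get("https://" + host + "/v2/")
	if err != nil {
		fmt.Println("registry ping failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry ping status:", resp.Status)
}
```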
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.031393 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc"] Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032439 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="extract" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032453 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="extract" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032472 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="pull" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032479 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="pull" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032490 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="util" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032496 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="util" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.032585 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4b38b889-3a80-4a92-ac57-00460c3dc1e6" containerName="extract" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.062795 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc"] Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.062990 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.065061 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.068137 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-pvsk2\"" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.068370 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.138123 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.138198 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lz2\" (UniqueName: \"kubernetes.io/projected/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-kube-api-access-x8lz2\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.240147 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lz2\" (UniqueName: \"kubernetes.io/projected/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-kube-api-access-x8lz2\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.240260 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.241234 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.265016 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lz2\" (UniqueName: \"kubernetes.io/projected/fdd01ab9-e21d-4a23-926f-bd0d47e362b3-kube-api-access-x8lz2\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-brvsc\" (UID: \"fdd01ab9-e21d-4a23-926f-bd0d47e362b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:23 crc kubenswrapper[5110]: I0130 00:24:23.384189 5110 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" Jan 30 00:24:28 crc kubenswrapper[5110]: E0130 00:24:28.882237 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.414738 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc"] Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.651498 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" event={"ID":"ee77ccd5-299e-47f9-ba9b-26e406040a34","Type":"ContainerStarted","Data":"0690f0d48a10795caec5b90c0c35b8ef7d3d2940171cd0d2fe85e93ba59e2979"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.653238 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" event={"ID":"c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995","Type":"ContainerStarted","Data":"d1b17a5715128ca456a0ca07c0fd5d021a63ce9d1e2a6f83a8cc78703ee7505d"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.655054 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" event={"ID":"8653dcb1-b9d6-4b22-b7ba-0c91d408836a","Type":"ContainerStarted","Data":"d6380e0d9d544cbf206db540a1a8de2d53ea045a81893a266e836af4c2db48fe"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.655240 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.656632 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" event={"ID":"fdd01ab9-e21d-4a23-926f-bd0d47e362b3","Type":"ContainerStarted","Data":"15c7172bff36ab341a8ef7c520418a66ca27e459bd7686a2f573d6ce47fd7a12"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.658536 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-4zx86" event={"ID":"897cae54-4f71-44e7-a9ee-1ef4558e0432","Type":"ContainerStarted","Data":"80e9bdb399d90629ff8e60767e8fcddf0e5f7b143a33d65676a626fd9c65dd87"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.658782 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
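The pull for pod 8ed862a3...v76cr failed at 00:24:17, is reported in back-off here at 00:24:28, and is retried at 00:24:42 (visible further down): a spacing consistent with the kubelet's image pull back-off, which by default starts around 10s and doubles per failure up to a 5-minute cap. A sketch of that schedule shape follows; the defaults are assumed and the kubelet's real bookkeeping differs, so this only illustrates the pattern:

```go
// Illustrative sketch of a capped doubling back-off like the kubelet's
// image pull back-off (assumed defaults: 10s initial delay, 300s cap).
// Not kubelet source; it only shows the retry spacing seen in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second
		maxDelay = 300 * time.Second
	)
	delay := initial
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```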
pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.659882 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" event={"ID":"40f000fa-a4e9-4f45-a846-707d5b5b1643","Type":"ContainerStarted","Data":"96f1482e171bf2de983c8d96693a1de8b432eb9ba82503866172d9a66cbe746d"} Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.661374 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-4zx86" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.679715 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx" podStartSLOduration=1.810660624 podStartE2EDuration="18.679692925s" podCreationTimestamp="2026-01-30 00:24:15 +0000 UTC" firstStartedPulling="2026-01-30 00:24:16.127499316 +0000 UTC m=+718.085735435" lastFinishedPulling="2026-01-30 00:24:32.996531607 +0000 UTC m=+734.954767736" observedRunningTime="2026-01-30 00:24:33.678444403 +0000 UTC m=+735.636680572" watchObservedRunningTime="2026-01-30 00:24:33.679692925 +0000 UTC m=+735.637929054" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.716859 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" podStartSLOduration=2.294763535 podStartE2EDuration="18.716840879s" podCreationTimestamp="2026-01-30 00:24:15 +0000 UTC" firstStartedPulling="2026-01-30 00:24:16.565121303 +0000 UTC m=+718.523357432" lastFinishedPulling="2026-01-30 00:24:32.987198647 +0000 UTC m=+734.945434776" observedRunningTime="2026-01-30 00:24:33.71180452 +0000 UTC m=+735.670040669" watchObservedRunningTime="2026-01-30 00:24:33.716840879 +0000 UTC m=+735.675077008" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.736203 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-j6qvs" podStartSLOduration=1.923103124 podStartE2EDuration="18.736187687s" podCreationTimestamp="2026-01-30 00:24:15 +0000 UTC" firstStartedPulling="2026-01-30 00:24:16.184113071 +0000 UTC m=+718.142349190" lastFinishedPulling="2026-01-30 00:24:32.997197624 +0000 UTC m=+734.955433753" observedRunningTime="2026-01-30 00:24:33.733360684 +0000 UTC m=+735.691596813" watchObservedRunningTime="2026-01-30 00:24:33.736187687 +0000 UTC m=+735.694423816" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.759912 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-4zx86" podStartSLOduration=2.2664602289999998 podStartE2EDuration="18.759897356s" podCreationTimestamp="2026-01-30 00:24:15 +0000 UTC" firstStartedPulling="2026-01-30 00:24:16.517619773 +0000 UTC m=+718.475855902" lastFinishedPulling="2026-01-30 00:24:33.0110569 +0000 UTC m=+734.969293029" observedRunningTime="2026-01-30 00:24:33.757826723 +0000 UTC m=+735.716062852" watchObservedRunningTime="2026-01-30 00:24:33.759897356 +0000 UTC m=+735.718133475" Jan 30 00:24:33 crc kubenswrapper[5110]: I0130 00:24:33.815681 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6" podStartSLOduration=2.060042553 podStartE2EDuration="18.815652719s" podCreationTimestamp="2026-01-30 00:24:15 +0000 UTC" 
firstStartedPulling="2026-01-30 00:24:16.212814748 +0000 UTC m=+718.171050877" lastFinishedPulling="2026-01-30 00:24:32.968424914 +0000 UTC m=+734.926661043" observedRunningTime="2026-01-30 00:24:33.802046369 +0000 UTC m=+735.760282498" watchObservedRunningTime="2026-01-30 00:24:33.815652719 +0000 UTC m=+735.773888848" Jan 30 00:24:37 crc kubenswrapper[5110]: I0130 00:24:37.691141 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" event={"ID":"fdd01ab9-e21d-4a23-926f-bd0d47e362b3","Type":"ContainerStarted","Data":"da91ec70cba66821defee5fa060ecc5dc077c3782ee18f415947af4d2d4594b0"} Jan 30 00:24:37 crc kubenswrapper[5110]: I0130 00:24:37.714969 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-brvsc" podStartSLOduration=10.63047301 podStartE2EDuration="14.714948337s" podCreationTimestamp="2026-01-30 00:24:23 +0000 UTC" firstStartedPulling="2026-01-30 00:24:33.429043643 +0000 UTC m=+735.387279772" lastFinishedPulling="2026-01-30 00:24:37.51351897 +0000 UTC m=+739.471755099" observedRunningTime="2026-01-30 00:24:37.712510355 +0000 UTC m=+739.670746494" watchObservedRunningTime="2026-01-30 00:24:37.714948337 +0000 UTC m=+739.673184476" Jan 30 00:24:42 crc kubenswrapper[5110]: E0130 00:24:42.109888 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:42 crc kubenswrapper[5110]: E0130 00:24:42.110498 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
Jan 30 00:24:42 crc kubenswrapper[5110]: E0130 00:24:42.109888 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb"
Jan 30 00:24:42 crc kubenswrapper[5110]: E0130 00:24:42.110498 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError"
Jan 30 00:24:42 crc kubenswrapper[5110]: E0130 00:24:42.112312 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.295501 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-tkvnv"]
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.303002 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.305862 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.306617 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-rkjmf\""
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.306882 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.310457 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-tkvnv"]
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.332754 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz8d\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-kube-api-access-2pz8d\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.332822 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.433695 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.433908 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz8d\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-kube-api-access-2pz8d\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.461681 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.479400 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz8d\" (UniqueName: \"kubernetes.io/projected/39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd-kube-api-access-2pz8d\") pod \"cert-manager-webhook-597b96b99b-tkvnv\" (UID: \"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd\") " pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.626753 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv"
Jan 30 00:24:42 crc kubenswrapper[5110]: I0130 00:24:42.857456 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-tkvnv"]
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.272697 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-d45t8"]
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.284982 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.288310 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-qfwzx\""
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.348235 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.348304 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwnr\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-kube-api-access-fwwnr\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.369052 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-d45t8"]
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.449950 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.450017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwnr\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-kube-api-access-fwwnr\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.470752 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.471941 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwnr\" (UniqueName: \"kubernetes.io/projected/36a64d1a-1715-4cf4-9f9a-f0641dee2884-kube-api-access-fwwnr\") pod \"cert-manager-cainjector-8966b78d4-d45t8\" (UID: \"36a64d1a-1715-4cf4-9f9a-f0641dee2884\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8"
pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8" Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.607195 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8" Jan 30 00:24:43 crc kubenswrapper[5110]: I0130 00:24:43.785612 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv" event={"ID":"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd","Type":"ContainerStarted","Data":"ec6b25f466ba156c40a093bc2210f4b718ffbd9a29342a877199fada1d3db070"} Jan 30 00:24:44 crc kubenswrapper[5110]: I0130 00:24:44.127729 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-d45t8"] Jan 30 00:24:44 crc kubenswrapper[5110]: W0130 00:24:44.138942 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36a64d1a_1715_4cf4_9f9a_f0641dee2884.slice/crio-0b7a6e393c6ff538b2956ea870aa9f839576805855903b034642274b96fb0d97 WatchSource:0}: Error finding container 0b7a6e393c6ff538b2956ea870aa9f839576805855903b034642274b96fb0d97: Status 404 returned error can't find the container with id 0b7a6e393c6ff538b2956ea870aa9f839576805855903b034642274b96fb0d97 Jan 30 00:24:44 crc kubenswrapper[5110]: I0130 00:24:44.673393 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-88w5t" Jan 30 00:24:44 crc kubenswrapper[5110]: I0130 00:24:44.795661 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8" event={"ID":"36a64d1a-1715-4cf4-9f9a-f0641dee2884","Type":"ContainerStarted","Data":"0b7a6e393c6ff538b2956ea870aa9f839576805855903b034642274b96fb0d97"} Jan 30 00:24:49 crc kubenswrapper[5110]: I0130 00:24:49.837408 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv" event={"ID":"39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd","Type":"ContainerStarted","Data":"f84b55e5fcb859e579efacea5ac63533f16eec55241ebbacf9fe8f2ed59ed4e5"} Jan 30 00:24:49 crc kubenswrapper[5110]: I0130 00:24:49.838360 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv" Jan 30 00:24:49 crc kubenswrapper[5110]: I0130 00:24:49.841095 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8" event={"ID":"36a64d1a-1715-4cf4-9f9a-f0641dee2884","Type":"ContainerStarted","Data":"d7224c734bdbf5c7a843e248f5f7f47004e429e0e946b65fa583d3901c737707"} Jan 30 00:24:49 crc kubenswrapper[5110]: I0130 00:24:49.870699 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv" podStartSLOduration=1.966488483 podStartE2EDuration="7.870669999s" podCreationTimestamp="2026-01-30 00:24:42 +0000 UTC" firstStartedPulling="2026-01-30 00:24:42.870011171 +0000 UTC m=+744.828247300" lastFinishedPulling="2026-01-30 00:24:48.774192657 +0000 UTC m=+750.732428816" observedRunningTime="2026-01-30 00:24:49.862506859 +0000 UTC m=+751.820743108" watchObservedRunningTime="2026-01-30 00:24:49.870669999 +0000 UTC m=+751.828906168" Jan 30 00:24:50 crc kubenswrapper[5110]: I0130 00:24:50.205029 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-d45t8" podStartSLOduration=2.53206647 
podStartE2EDuration="7.204944141s" podCreationTimestamp="2026-01-30 00:24:43 +0000 UTC" firstStartedPulling="2026-01-30 00:24:44.142875095 +0000 UTC m=+746.101111224" lastFinishedPulling="2026-01-30 00:24:48.815752766 +0000 UTC m=+750.773988895" observedRunningTime="2026-01-30 00:24:50.186859086 +0000 UTC m=+752.145095275" watchObservedRunningTime="2026-01-30 00:24:50.204944141 +0000 UTC m=+752.163180360" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.449220 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-h6lwn"] Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.494456 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-h6lwn"] Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.494776 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.498179 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-r26g5\"" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.601259 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-bound-sa-token\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.601323 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qp6s\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-kube-api-access-9qp6s\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.702622 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qp6s\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-kube-api-access-9qp6s\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.702862 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-bound-sa-token\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.734948 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-bound-sa-token\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.737598 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qp6s\" (UniqueName: \"kubernetes.io/projected/5ea0e2d8-954b-46b8-b3e6-97e539a11a36-kube-api-access-9qp6s\") pod \"cert-manager-759f64656b-h6lwn\" (UID: \"5ea0e2d8-954b-46b8-b3e6-97e539a11a36\") " pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 
30 00:24:52 crc kubenswrapper[5110]: I0130 00:24:52.831948 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-h6lwn" Jan 30 00:24:53 crc kubenswrapper[5110]: I0130 00:24:53.155846 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-h6lwn"] Jan 30 00:24:53 crc kubenswrapper[5110]: W0130 00:24:53.159910 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea0e2d8_954b_46b8_b3e6_97e539a11a36.slice/crio-6c4a0f9af196f62e38349bef466243df54bf49d9e45de60beee31d1645e8b76e WatchSource:0}: Error finding container 6c4a0f9af196f62e38349bef466243df54bf49d9e45de60beee31d1645e8b76e: Status 404 returned error can't find the container with id 6c4a0f9af196f62e38349bef466243df54bf49d9e45de60beee31d1645e8b76e Jan 30 00:24:53 crc kubenswrapper[5110]: I0130 00:24:53.877277 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-h6lwn" event={"ID":"5ea0e2d8-954b-46b8-b3e6-97e539a11a36","Type":"ContainerStarted","Data":"7e6731702e917d456b57c9739178c53a99a53721da6433e373020767c7181bd6"} Jan 30 00:24:53 crc kubenswrapper[5110]: I0130 00:24:53.878030 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-h6lwn" event={"ID":"5ea0e2d8-954b-46b8-b3e6-97e539a11a36","Type":"ContainerStarted","Data":"6c4a0f9af196f62e38349bef466243df54bf49d9e45de60beee31d1645e8b76e"} Jan 30 00:24:53 crc kubenswrapper[5110]: I0130 00:24:53.903897 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-h6lwn" podStartSLOduration=1.903854889 podStartE2EDuration="1.903854889s" podCreationTimestamp="2026-01-30 00:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:24:53.903250373 +0000 UTC m=+755.861486532" watchObservedRunningTime="2026-01-30 00:24:53.903854889 +0000 UTC m=+755.862091048" Jan 30 00:24:55 crc kubenswrapper[5110]: I0130 00:24:55.853156 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-tkvnv" Jan 30 00:24:55 crc kubenswrapper[5110]: E0130 00:24:55.875296 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:25:08 crc kubenswrapper[5110]: E0130 00:25:08.888269 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:25:09 crc kubenswrapper[5110]: I0130 00:25:09.210667 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:09 crc kubenswrapper[5110]: I0130 00:25:09.210807 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:25:24 crc kubenswrapper[5110]: E0130 00:25:24.115125 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:24 crc kubenswrapper[5110]: E0130 00:25:24.116275 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:24 crc kubenswrapper[5110]: E0130 00:25:24.118057 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:25:38 crc kubenswrapper[5110]: E0130 00:25:38.885424 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:25:39 crc kubenswrapper[5110]: I0130 00:25:39.211262 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:39 crc kubenswrapper[5110]: I0130 00:25:39.211412 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.322863 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.333427 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.344414 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.419281 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.419677 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgq9g\" (UniqueName: \"kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.419771 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.523594 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgq9g\" (UniqueName: \"kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.523686 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.523770 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.524632 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.524743 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.559614 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgq9g\" (UniqueName: \"kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g\") pod \"certified-operators-86t5v\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:45 crc kubenswrapper[5110]: I0130 00:25:45.659153 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:46 crc kubenswrapper[5110]: I0130 00:25:46.170975 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:46 crc kubenswrapper[5110]: I0130 00:25:46.418294 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerStarted","Data":"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7"} Jan 30 00:25:46 crc kubenswrapper[5110]: I0130 00:25:46.418891 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerStarted","Data":"2eaafa4711fb427efa977b2f27878cdfdb6ddd583046372ab4e56c2393b0892a"} Jan 30 00:25:47 crc kubenswrapper[5110]: I0130 00:25:47.431029 5110 generic.go:358] "Generic (PLEG): container finished" podID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerID="7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7" exitCode=0 Jan 30 00:25:47 crc kubenswrapper[5110]: I0130 00:25:47.431192 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerDied","Data":"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7"} Jan 30 00:25:48 crc kubenswrapper[5110]: E0130 00:25:48.797291 5110 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84b004_5770_4c2e_957d_73d2c8ed8f38.slice/crio-conmon-a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0.scope\": RecentStats: unable to find data in memory cache]" Jan 30 00:25:49 crc kubenswrapper[5110]: I0130 00:25:49.454028 5110 generic.go:358] "Generic (PLEG): container finished" podID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerID="a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0" exitCode=0 Jan 30 00:25:49 crc kubenswrapper[5110]: I0130 00:25:49.454150 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerDied","Data":"a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0"} Jan 30 00:25:50 crc kubenswrapper[5110]: I0130 00:25:50.466854 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerStarted","Data":"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921"} Jan 30 00:25:50 crc kubenswrapper[5110]: I0130 00:25:50.500074 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86t5v" podStartSLOduration=4.627994872 podStartE2EDuration="5.500048637s" podCreationTimestamp="2026-01-30 00:25:45 +0000 UTC" firstStartedPulling="2026-01-30 00:25:47.433084706 +0000 UTC m=+809.391320875" lastFinishedPulling="2026-01-30 00:25:48.305138501 +0000 UTC m=+810.263374640" observedRunningTime="2026-01-30 00:25:50.493386655 +0000 UTC m=+812.451622814" watchObservedRunningTime="2026-01-30 00:25:50.500048637 +0000 UTC m=+812.458284766" Jan 30 00:25:53 crc kubenswrapper[5110]: E0130 00:25:53.876019 5110 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:25:55 crc kubenswrapper[5110]: I0130 00:25:55.659718 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:55 crc kubenswrapper[5110]: I0130 00:25:55.660503 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:55 crc kubenswrapper[5110]: I0130 00:25:55.734772 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:56 crc kubenswrapper[5110]: I0130 00:25:56.586556 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:56 crc kubenswrapper[5110]: I0130 00:25:56.662848 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:58 crc kubenswrapper[5110]: I0130 00:25:58.538821 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86t5v" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="registry-server" containerID="cri-o://c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921" gracePeriod=2 Jan 30 00:25:58 crc kubenswrapper[5110]: I0130 00:25:58.998038 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.054468 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgq9g\" (UniqueName: \"kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g\") pod \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.054765 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content\") pod \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.054856 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities\") pod \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\" (UID: \"eb84b004-5770-4c2e-957d-73d2c8ed8f38\") " Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.056459 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities" (OuterVolumeSpecName: "utilities") pod "eb84b004-5770-4c2e-957d-73d2c8ed8f38" (UID: "eb84b004-5770-4c2e-957d-73d2c8ed8f38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.064961 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g" (OuterVolumeSpecName: "kube-api-access-xgq9g") pod "eb84b004-5770-4c2e-957d-73d2c8ed8f38" (UID: "eb84b004-5770-4c2e-957d-73d2c8ed8f38"). InnerVolumeSpecName "kube-api-access-xgq9g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.119601 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb84b004-5770-4c2e-957d-73d2c8ed8f38" (UID: "eb84b004-5770-4c2e-957d-73d2c8ed8f38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.157191 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.157420 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb84b004-5770-4c2e-957d-73d2c8ed8f38-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.157447 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgq9g\" (UniqueName: \"kubernetes.io/projected/eb84b004-5770-4c2e-957d-73d2c8ed8f38-kube-api-access-xgq9g\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.555964 5110 generic.go:358] "Generic (PLEG): container finished" podID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerID="c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921" exitCode=0 Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.556077 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerDied","Data":"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921"} Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.556156 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86t5v" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.556188 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86t5v" event={"ID":"eb84b004-5770-4c2e-957d-73d2c8ed8f38","Type":"ContainerDied","Data":"2eaafa4711fb427efa977b2f27878cdfdb6ddd583046372ab4e56c2393b0892a"} Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.556239 5110 scope.go:117] "RemoveContainer" containerID="c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.605404 5110 scope.go:117] "RemoveContainer" containerID="a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.631759 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.643947 5110 scope.go:117] "RemoveContainer" containerID="7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.644827 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86t5v"] Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.669541 5110 scope.go:117] "RemoveContainer" containerID="c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921" Jan 30 00:25:59 crc kubenswrapper[5110]: E0130 00:25:59.670226 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921\": container with ID starting with c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921 not found: ID does not exist" containerID="c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.670286 
5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921"} err="failed to get container status \"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921\": rpc error: code = NotFound desc = could not find container \"c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921\": container with ID starting with c337ea07aa5b6757230af8d050b55f0153106eafdfd80c4f6ed01da143464921 not found: ID does not exist" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.670322 5110 scope.go:117] "RemoveContainer" containerID="a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0" Jan 30 00:25:59 crc kubenswrapper[5110]: E0130 00:25:59.670981 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0\": container with ID starting with a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0 not found: ID does not exist" containerID="a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.671053 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0"} err="failed to get container status \"a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0\": rpc error: code = NotFound desc = could not find container \"a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0\": container with ID starting with a3259f74b3fdf7c84712d2d03b53e69ebe5ad876b2d5e8a25592225580ff0ae0 not found: ID does not exist" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.671098 5110 scope.go:117] "RemoveContainer" containerID="7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7" Jan 30 00:25:59 crc kubenswrapper[5110]: E0130 00:25:59.671550 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7\": container with ID starting with 7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7 not found: ID does not exist" containerID="7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7" Jan 30 00:25:59 crc kubenswrapper[5110]: I0130 00:25:59.671595 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7"} err="failed to get container status \"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7\": rpc error: code = NotFound desc = could not find container \"7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7\": container with ID starting with 7ee6b98380252cad56bd344e70af9633237df5f504cc575a0058e1a46e8d0bc7 not found: ID does not exist" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.155147 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zvgln"] Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156283 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="extract-content" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156306 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" 
containerName="extract-content" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156355 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="registry-server" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156364 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="registry-server" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156381 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="extract-utilities" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156388 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="extract-utilities" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.156499 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" containerName="registry-server" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.178311 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zvgln"] Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.178530 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.183254 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.183537 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.183742 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.275242 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdj4\" (UniqueName: \"kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4\") pod \"auto-csr-approver-29495546-zvgln\" (UID: \"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f\") " pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.376912 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gvdj4\" (UniqueName: \"kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4\") pod \"auto-csr-approver-29495546-zvgln\" (UID: \"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f\") " pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.409804 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvdj4\" (UniqueName: \"kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4\") pod \"auto-csr-approver-29495546-zvgln\" (UID: \"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f\") " pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.497975 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.761471 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zvgln"] Jan 30 00:26:00 crc kubenswrapper[5110]: I0130 00:26:00.885450 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb84b004-5770-4c2e-957d-73d2c8ed8f38" path="/var/lib/kubelet/pods/eb84b004-5770-4c2e-957d-73d2c8ed8f38/volumes" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.411377 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.426868 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.427093 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.500597 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.500697 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljkc\" (UniqueName: \"kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.500958 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.579619 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zvgln" event={"ID":"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f","Type":"ContainerStarted","Data":"c978abdddd6e8bc6575051b982269a2ebfe0428195459772b7d0822acb516ea0"} Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.602741 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.602976 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qljkc\" (UniqueName: \"kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.603196 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.603206 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.603871 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.636727 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljkc\" (UniqueName: \"kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc\") pod \"redhat-operators-8lwt7\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:01 crc kubenswrapper[5110]: I0130 00:26:01.758170 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.039918 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:02 crc kubenswrapper[5110]: W0130 00:26:02.079151 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce924857_5d62_477d_92a0_952361a0a5e5.slice/crio-624ff4457f08bd7632be78386e9a2d9caf8d5e1021c32f2da59fab0c06b82605 WatchSource:0}: Error finding container 624ff4457f08bd7632be78386e9a2d9caf8d5e1021c32f2da59fab0c06b82605: Status 404 returned error can't find the container with id 624ff4457f08bd7632be78386e9a2d9caf8d5e1021c32f2da59fab0c06b82605 Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.589107 5110 generic.go:358] "Generic (PLEG): container finished" podID="ce924857-5d62-477d-92a0-952361a0a5e5" containerID="a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400" exitCode=0 Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.589281 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerDied","Data":"a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400"} Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.589388 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerStarted","Data":"624ff4457f08bd7632be78386e9a2d9caf8d5e1021c32f2da59fab0c06b82605"} Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.592587 5110 generic.go:358] "Generic (PLEG): container finished" podID="0d96fc8a-0f4a-4be8-91a9-505be2b71c3f" containerID="c4a539d0d322a2880330620b51bce52929cfc72abb47975abf0ce1d11de44acc" exitCode=0 Jan 30 00:26:02 crc kubenswrapper[5110]: I0130 00:26:02.592726 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29495546-zvgln" event={"ID":"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f","Type":"ContainerDied","Data":"c4a539d0d322a2880330620b51bce52929cfc72abb47975abf0ce1d11de44acc"} Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.589975 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.595486 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.602562 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerStarted","Data":"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8"} Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.623479 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.754511 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzzs\" (UniqueName: \"kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.754607 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.754761 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.855998 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.856092 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzzs\" (UniqueName: \"kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.856122 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 
00:26:03.856657 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.857003 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.878857 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzzs\" (UniqueName: \"kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs\") pod \"community-operators-pfwgp\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.901252 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:03 crc kubenswrapper[5110]: I0130 00:26:03.955670 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.058720 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvdj4\" (UniqueName: \"kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4\") pod \"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f\" (UID: \"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f\") " Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.068066 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4" (OuterVolumeSpecName: "kube-api-access-gvdj4") pod "0d96fc8a-0f4a-4be8-91a9-505be2b71c3f" (UID: "0d96fc8a-0f4a-4be8-91a9-505be2b71c3f"). InnerVolumeSpecName "kube-api-access-gvdj4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.160351 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvdj4\" (UniqueName: \"kubernetes.io/projected/0d96fc8a-0f4a-4be8-91a9-505be2b71c3f-kube-api-access-gvdj4\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.252478 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:04 crc kubenswrapper[5110]: W0130 00:26:04.285529 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd238869b_4c17_4884_8eff_86a9367afc03.slice/crio-cd629ed26ee2a5081e3df97b370c47b72cffbb7fb5d92b22de2c7b3df5fecf3a WatchSource:0}: Error finding container cd629ed26ee2a5081e3df97b370c47b72cffbb7fb5d92b22de2c7b3df5fecf3a: Status 404 returned error can't find the container with id cd629ed26ee2a5081e3df97b370c47b72cffbb7fb5d92b22de2c7b3df5fecf3a Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.634673 5110 generic.go:358] "Generic (PLEG): container finished" podID="d238869b-4c17-4884-8eff-86a9367afc03" containerID="01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a" exitCode=0 Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.634762 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerDied","Data":"01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a"} Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.635415 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerStarted","Data":"cd629ed26ee2a5081e3df97b370c47b72cffbb7fb5d92b22de2c7b3df5fecf3a"} Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.638761 5110 generic.go:358] "Generic (PLEG): container finished" podID="ce924857-5d62-477d-92a0-952361a0a5e5" containerID="2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8" exitCode=0 Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.638916 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerDied","Data":"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8"} Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.667609 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zvgln" event={"ID":"0d96fc8a-0f4a-4be8-91a9-505be2b71c3f","Type":"ContainerDied","Data":"c978abdddd6e8bc6575051b982269a2ebfe0428195459772b7d0822acb516ea0"} Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.667672 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c978abdddd6e8bc6575051b982269a2ebfe0428195459772b7d0822acb516ea0" Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.667787 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zvgln" Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.981819 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-hvbqm"] Jan 30 00:26:04 crc kubenswrapper[5110]: I0130 00:26:04.989913 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-hvbqm"] Jan 30 00:26:05 crc kubenswrapper[5110]: I0130 00:26:05.687394 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerStarted","Data":"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64"} Jan 30 00:26:05 crc kubenswrapper[5110]: I0130 00:26:05.725960 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8lwt7" podStartSLOduration=4.035681911 podStartE2EDuration="4.725928722s" podCreationTimestamp="2026-01-30 00:26:01 +0000 UTC" firstStartedPulling="2026-01-30 00:26:02.59045042 +0000 UTC m=+824.548686549" lastFinishedPulling="2026-01-30 00:26:03.280697191 +0000 UTC m=+825.238933360" observedRunningTime="2026-01-30 00:26:05.717812142 +0000 UTC m=+827.676048281" watchObservedRunningTime="2026-01-30 00:26:05.725928722 +0000 UTC m=+827.684164881" Jan 30 00:26:06 crc kubenswrapper[5110]: I0130 00:26:06.702114 5110 generic.go:358] "Generic (PLEG): container finished" podID="d238869b-4c17-4884-8eff-86a9367afc03" containerID="cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab" exitCode=0 Jan 30 00:26:06 crc kubenswrapper[5110]: I0130 00:26:06.702236 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerDied","Data":"cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab"} Jan 30 00:26:06 crc kubenswrapper[5110]: I0130 00:26:06.888488 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc630c8-ed9b-48ed-b521-ee1b36e22c0a" path="/var/lib/kubelet/pods/fcc630c8-ed9b-48ed-b521-ee1b36e22c0a/volumes" Jan 30 00:26:07 crc kubenswrapper[5110]: I0130 00:26:07.715409 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerStarted","Data":"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716"} Jan 30 00:26:07 crc kubenswrapper[5110]: I0130 00:26:07.750889 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pfwgp" podStartSLOduration=3.848508104 podStartE2EDuration="4.750865032s" podCreationTimestamp="2026-01-30 00:26:03 +0000 UTC" firstStartedPulling="2026-01-30 00:26:04.637908782 +0000 UTC m=+826.596144921" lastFinishedPulling="2026-01-30 00:26:05.5402657 +0000 UTC m=+827.498501849" observedRunningTime="2026-01-30 00:26:07.742969628 +0000 UTC m=+829.701205797" watchObservedRunningTime="2026-01-30 00:26:07.750865032 +0000 UTC m=+829.709101201" Jan 30 00:26:07 crc kubenswrapper[5110]: E0130 00:26:07.874153 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image 
Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.211489 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.211655 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.211778 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.214280 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e"} pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.214437 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" containerID="cri-o://16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e" gracePeriod=600 Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.734521 5110 generic.go:358] "Generic (PLEG): container finished" podID="97dc714a-5d84-4c81-99ef-13067437fcad" containerID="16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e" exitCode=0 Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.734652 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerDied","Data":"16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e"} Jan 30 00:26:09 crc kubenswrapper[5110]: I0130 00:26:09.735478 5110 scope.go:117] "RemoveContainer" containerID="373bedbf4f4713b59db4d20107b7ddf7abd7e4d8fdb5905eb80b15e17e28f76f" Jan 30 00:26:10 crc kubenswrapper[5110]: I0130 00:26:10.747985 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" 
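event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5"}

This is the full liveness-failure cycle: the HTTP probe against 127.0.0.1:8798/health is refused, the sync loop marks the container unhealthy, the runtime kills 16d027ba... honoring the pod's 600-second termination grace period, the previous dead instance 373bedbf... is garbage-collected via RemoveContainer, and a replacement container 7f93629... starts. A hedged sketch, as a plain dict, of a probe stanza that would drive this behavior; the endpoint comes from the log, while the period/threshold values are assumed Kubernetes defaults rather than the daemon's actual manifest:

    # Sketch of an equivalent HTTP liveness probe. Endpoint from the log above;
    # periodSeconds/failureThreshold/timeoutSeconds are assumed defaults.
    liveness_probe = {
        "httpGet": {"host": "127.0.0.1", "port": 8798, "path": "/health"},
        "periodSeconds": 10,     # assumption
        "failureThreshold": 3,   # assumption
        "timeoutSeconds": 1,     # assumption
    }
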
event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5"} Jan 30 00:26:11 crc kubenswrapper[5110]: I0130 00:26:11.758622 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:11 crc kubenswrapper[5110]: I0130 00:26:11.758688 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:12 crc kubenswrapper[5110]: I0130 00:26:12.828162 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lwt7" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="registry-server" probeResult="failure" output=< Jan 30 00:26:12 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 30 00:26:12 crc kubenswrapper[5110]: > Jan 30 00:26:13 crc kubenswrapper[5110]: I0130 00:26:13.956130 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:13 crc kubenswrapper[5110]: I0130 00:26:13.956981 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:14 crc kubenswrapper[5110]: I0130 00:26:14.006783 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:14 crc kubenswrapper[5110]: I0130 00:26:14.857517 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:14 crc kubenswrapper[5110]: I0130 00:26:14.946458 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:16 crc kubenswrapper[5110]: I0130 00:26:16.816427 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pfwgp" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="registry-server" containerID="cri-o://e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716" gracePeriod=2 Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.203774 5110 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.216268 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzzs\" (UniqueName: \"kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs\") pod \"d238869b-4c17-4884-8eff-86a9367afc03\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.216431 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities\") pod \"d238869b-4c17-4884-8eff-86a9367afc03\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.216565 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content\") pod \"d238869b-4c17-4884-8eff-86a9367afc03\" (UID: \"d238869b-4c17-4884-8eff-86a9367afc03\") " Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.218302 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities" (OuterVolumeSpecName: "utilities") pod "d238869b-4c17-4884-8eff-86a9367afc03" (UID: "d238869b-4c17-4884-8eff-86a9367afc03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.243553 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs" (OuterVolumeSpecName: "kube-api-access-fkzzs") pod "d238869b-4c17-4884-8eff-86a9367afc03" (UID: "d238869b-4c17-4884-8eff-86a9367afc03"). InnerVolumeSpecName "kube-api-access-fkzzs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.307350 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d238869b-4c17-4884-8eff-86a9367afc03" (UID: "d238869b-4c17-4884-8eff-86a9367afc03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.318033 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fkzzs\" (UniqueName: \"kubernetes.io/projected/d238869b-4c17-4884-8eff-86a9367afc03-kube-api-access-fkzzs\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.318068 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.318078 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d238869b-4c17-4884-8eff-86a9367afc03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.885263 5110 generic.go:358] "Generic (PLEG): container finished" podID="d238869b-4c17-4884-8eff-86a9367afc03" containerID="e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716" exitCode=0 Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.885374 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfwgp" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.885398 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerDied","Data":"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716"} Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.887624 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfwgp" event={"ID":"d238869b-4c17-4884-8eff-86a9367afc03","Type":"ContainerDied","Data":"cd629ed26ee2a5081e3df97b370c47b72cffbb7fb5d92b22de2c7b3df5fecf3a"} Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.887669 5110 scope.go:117] "RemoveContainer" containerID="e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.906373 5110 scope.go:117] "RemoveContainer" containerID="cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.921859 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.930763 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pfwgp"] Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.938278 5110 scope.go:117] "RemoveContainer" containerID="01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.961776 5110 scope.go:117] "RemoveContainer" containerID="e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716" Jan 30 00:26:17 crc kubenswrapper[5110]: E0130 00:26:17.962482 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716\": container with ID starting with e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716 not found: ID does not exist" containerID="e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.962612 
5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716"} err="failed to get container status \"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716\": rpc error: code = NotFound desc = could not find container \"e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716\": container with ID starting with e13f18ed8ce7ce9dd7ef64eaa855f9b02846e5756b390e56259acd6baa372716 not found: ID does not exist" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.962711 5110 scope.go:117] "RemoveContainer" containerID="cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab" Jan 30 00:26:17 crc kubenswrapper[5110]: E0130 00:26:17.963306 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab\": container with ID starting with cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab not found: ID does not exist" containerID="cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.963493 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab"} err="failed to get container status \"cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab\": rpc error: code = NotFound desc = could not find container \"cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab\": container with ID starting with cd0aacc156a0fabfe2c5bb0b44a64866da299b1684443085ee6e217fd5491cab not found: ID does not exist" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.963585 5110 scope.go:117] "RemoveContainer" containerID="01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a" Jan 30 00:26:17 crc kubenswrapper[5110]: E0130 00:26:17.964449 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a\": container with ID starting with 01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a not found: ID does not exist" containerID="01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a" Jan 30 00:26:17 crc kubenswrapper[5110]: I0130 00:26:17.964536 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a"} err="failed to get container status \"01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a\": rpc error: code = NotFound desc = could not find container \"01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a\": container with ID starting with 01c84646f245c3ce22a45f59278e1871552f1e8d43537dd590ed517b614a1e8a not found: ID does not exist" Jan 30 00:26:18 crc kubenswrapper[5110]: I0130 00:26:18.891152 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d238869b-4c17-4884-8eff-86a9367afc03" path="/var/lib/kubelet/pods/d238869b-4c17-4884-8eff-86a9367afc03/volumes" Jan 30 00:26:19 crc kubenswrapper[5110]: I0130 00:26:19.553851 5110 scope.go:117] "RemoveContainer" containerID="a22ee6c105bf272ad06c09b9ef04ca89b3f8b94bf4da2a257a230755508241f4" Jan 30 00:26:19 crc kubenswrapper[5110]: E0130 00:26:19.874970 5110 pod_workers.go:1301] "Error syncing pod, skipping" 
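err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"

The RemoveContainer / "ContainerStatus from runtime service failed" NotFound triplets above (for e13f18..., cd0aacc..., 01c8464...) are a benign race: by the time the kubelet re-issues the deletes for the REMOVEd pod, CRI-O has already pruned the containers, and the follow-up status lookups find nothing. Treating NotFound as success is what lets the cleanup converge; a minimal sketch of that idempotent-delete pattern (the names here are illustrative, not kubelet's):

    class NotFoundError(Exception):
        """Stand-in for the CRI NotFound status code."""

    def remove_container(remove_fn, container_id: str) -> None:
        try:
            remove_fn(container_id)
        except NotFoundError:
            pass  # already gone: deletion is idempotent, mirroring the log above
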
err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:26:21 crc kubenswrapper[5110]: I0130 00:26:21.833903 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:21 crc kubenswrapper[5110]: I0130 00:26:21.911533 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:22 crc kubenswrapper[5110]: I0130 00:26:22.093798 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:22 crc kubenswrapper[5110]: I0130 00:26:22.928702 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8lwt7" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="registry-server" containerID="cri-o://cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64" gracePeriod=2 Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.435853 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.516834 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qljkc\" (UniqueName: \"kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc\") pod \"ce924857-5d62-477d-92a0-952361a0a5e5\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.516965 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content\") pod \"ce924857-5d62-477d-92a0-952361a0a5e5\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.516995 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities\") pod \"ce924857-5d62-477d-92a0-952361a0a5e5\" (UID: \"ce924857-5d62-477d-92a0-952361a0a5e5\") " Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.519172 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities" (OuterVolumeSpecName: "utilities") pod "ce924857-5d62-477d-92a0-952361a0a5e5" (UID: "ce924857-5d62-477d-92a0-952361a0a5e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.529977 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc" (OuterVolumeSpecName: "kube-api-access-qljkc") pod "ce924857-5d62-477d-92a0-952361a0a5e5" (UID: "ce924857-5d62-477d-92a0-952361a0a5e5"). InnerVolumeSpecName "kube-api-access-qljkc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.619775 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.620280 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qljkc\" (UniqueName: \"kubernetes.io/projected/ce924857-5d62-477d-92a0-952361a0a5e5-kube-api-access-qljkc\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.695739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce924857-5d62-477d-92a0-952361a0a5e5" (UID: "ce924857-5d62-477d-92a0-952361a0a5e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.722041 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce924857-5d62-477d-92a0-952361a0a5e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.940798 5110 generic.go:358] "Generic (PLEG): container finished" podID="ce924857-5d62-477d-92a0-952361a0a5e5" containerID="cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64" exitCode=0 Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.940883 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerDied","Data":"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64"} Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.940943 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lwt7" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.940962 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lwt7" event={"ID":"ce924857-5d62-477d-92a0-952361a0a5e5","Type":"ContainerDied","Data":"624ff4457f08bd7632be78386e9a2d9caf8d5e1021c32f2da59fab0c06b82605"} Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.941002 5110 scope.go:117] "RemoveContainer" containerID="cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64" Jan 30 00:26:23 crc kubenswrapper[5110]: I0130 00:26:23.976546 5110 scope.go:117] "RemoveContainer" containerID="2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:23.999983 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.009695 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8lwt7"] Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.026031 5110 scope.go:117] "RemoveContainer" containerID="a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.050145 5110 scope.go:117] "RemoveContainer" containerID="cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64" Jan 30 00:26:24 crc kubenswrapper[5110]: E0130 00:26:24.050622 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64\": container with ID starting with cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64 not found: ID does not exist" containerID="cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.050657 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64"} err="failed to get container status \"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64\": rpc error: code = NotFound desc = could not find container \"cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64\": container with ID starting with cc4f78c6499a6af68eb8f93fab0650ab4eb5d70a055300f296e170b1a4531e64 not found: ID does not exist" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.050681 5110 scope.go:117] "RemoveContainer" containerID="2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8" Jan 30 00:26:24 crc kubenswrapper[5110]: E0130 00:26:24.051065 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8\": container with ID starting with 2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8 not found: ID does not exist" containerID="2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.051153 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8"} err="failed to get container status \"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8\": rpc error: code = NotFound desc = could not find container 
\"2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8\": container with ID starting with 2d6534d6bd7feb8ca0ec3a7482486cd71b7df86d4d95ee5cf83009dfa6316fd8 not found: ID does not exist" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.051198 5110 scope.go:117] "RemoveContainer" containerID="a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400" Jan 30 00:26:24 crc kubenswrapper[5110]: E0130 00:26:24.051694 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400\": container with ID starting with a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400 not found: ID does not exist" containerID="a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.051743 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400"} err="failed to get container status \"a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400\": rpc error: code = NotFound desc = could not find container \"a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400\": container with ID starting with a9b2365ffe263d87f99a70da8444f524f8927f4c22dd0ed72200890feb692400 not found: ID does not exist" Jan 30 00:26:24 crc kubenswrapper[5110]: I0130 00:26:24.887727 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" path="/var/lib/kubelet/pods/ce924857-5d62-477d-92a0-952361a0a5e5/volumes" Jan 30 00:26:31 crc kubenswrapper[5110]: E0130 00:26:31.887396 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:26:47 crc kubenswrapper[5110]: E0130 00:26:47.119484 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 
199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:26:47 crc kubenswrapper[5110]: E0130 00:26:47.120682 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:26:47 crc kubenswrapper[5110]: E0130 00:26:47.121987 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" 
podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:27:01 crc kubenswrapper[5110]: E0130 00:27:01.877620 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.019615 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8r9xt/must-gather-5z7v4"] Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021224 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d96fc8a-0f4a-4be8-91a9-505be2b71c3f" containerName="oc" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021243 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d96fc8a-0f4a-4be8-91a9-505be2b71c3f" containerName="oc" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021259 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="extract-utilities" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021267 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="extract-utilities" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021280 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021288 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021300 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021307 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021371 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="extract-content" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021381 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="extract-content" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021394 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="extract-utilities" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021402 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="extract-utilities" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021411 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="extract-content" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021419 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="extract-content" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021543 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d96fc8a-0f4a-4be8-91a9-505be2b71c3f" containerName="oc" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021562 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d238869b-4c17-4884-8eff-86a9367afc03" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.021575 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce924857-5d62-477d-92a0-952361a0a5e5" containerName="registry-server" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.030081 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.033794 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8r9xt\"/\"openshift-service-ca.crt\"" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.036622 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8r9xt\"/\"kube-root-ca.crt\"" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.036986 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-8r9xt\"/\"default-dockercfg-vt8d9\"" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.041874 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r9xt/must-gather-5z7v4"] Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.142857 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.142920 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgsz8\" (UniqueName: \"kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.244599 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.244717 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgsz8\" (UniqueName: \"kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.245241 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.270698 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgsz8\" (UniqueName: \"kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8\") pod \"must-gather-5z7v4\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.360942 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:27:11 crc kubenswrapper[5110]: I0130 00:27:11.858734 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r9xt/must-gather-5z7v4"] Jan 30 00:27:12 crc kubenswrapper[5110]: I0130 00:27:12.388396 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" event={"ID":"97c1c191-5d38-49eb-9205-8631d974a301","Type":"ContainerStarted","Data":"65a185cb9cc020070239709a8b240530633f18cd7acf140dfc9d29550ff784fc"} Jan 30 00:27:14 crc kubenswrapper[5110]: E0130 00:27:14.885864 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:27:18 crc kubenswrapper[5110]: I0130 00:27:18.451613 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" event={"ID":"97c1c191-5d38-49eb-9205-8631d974a301","Type":"ContainerStarted","Data":"0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68"} Jan 30 00:27:18 crc kubenswrapper[5110]: I0130 00:27:18.453509 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" 
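event={"ID":"97c1c191-5d38-49eb-9205-8631d974a301","Type":"ContainerStarted","Data":"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"} Jan 30 00:27:18 crc kubenswrapper[5110]: I0130 00:27:18.483554 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" podStartSLOduration=2.669727814 podStartE2EDuration="8.483527336s" podCreationTimestamp="2026-01-30 00:27:10 +0000 UTC" firstStartedPulling="2026-01-30 00:27:11.875030471 +0000 UTC m=+893.833266610" lastFinishedPulling="2026-01-30 00:27:17.688829963 +0000 UTC m=+899.647066132" observedRunningTime="2026-01-30 00:27:18.478994009 +0000 UTC m=+900.437230178" watchObservedRunningTime="2026-01-30 00:27:18.483527336 +0000 UTC m=+900.441763505"

The startup-latency tracker's fields are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (about 8.4835s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (about 5.8138s), leaving about 2.6697s. A small sketch reproducing the arithmetic from the wall-clock timestamps; the kubelet actually uses the monotonic m=+ offsets, so the final digits differ slightly:

    from datetime import datetime

    def parse(ts: str) -> datetime:
        """Parse the tracker's timestamps, trimming nanoseconds to microseconds."""
        date, time_, off = ts.replace(" UTC", "").split()[:3]
        if "." in time_:
            whole, frac = time_.split(".")
            time_, fmt = f"{whole}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f %z"
        else:
            fmt = "%Y-%m-%d %H:%M:%S %z"
        return datetime.strptime(f"{date} {time_} {off}", fmt)

    created = parse("2026-01-30 00:27:10 +0000 UTC")
    running = parse("2026-01-30 00:27:18.483527336 +0000 UTC m=+900.441763505")
    pull_s  = parse("2026-01-30 00:27:11.875030471 +0000 UTC m=+893.833266610")
    pull_e  = parse("2026-01-30 00:27:17.688829963 +0000 UTC m=+899.647066132")
    e2e = running - created
    slo = e2e - (pull_e - pull_s)
    print(e2e.total_seconds(), slo.total_seconds())  # ~8.483527 and ~2.669728
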
event={"ID":"97c1c191-5d38-49eb-9205-8631d974a301","Type":"ContainerStarted","Data":"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"} Jan 30 00:27:18 crc kubenswrapper[5110]: I0130 00:27:18.483554 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" podStartSLOduration=2.669727814 podStartE2EDuration="8.483527336s" podCreationTimestamp="2026-01-30 00:27:10 +0000 UTC" firstStartedPulling="2026-01-30 00:27:11.875030471 +0000 UTC m=+893.833266610" lastFinishedPulling="2026-01-30 00:27:17.688829963 +0000 UTC m=+899.647066132" observedRunningTime="2026-01-30 00:27:18.478994009 +0000 UTC m=+900.437230178" watchObservedRunningTime="2026-01-30 00:27:18.483527336 +0000 UTC m=+900.441763505" Jan 30 00:27:19 crc kubenswrapper[5110]: I0130 00:27:19.326443 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v6j88_f47cb22d-f09e-43a7-95e0-0e1008827f08/kube-multus/0.log" Jan 30 00:27:19 crc kubenswrapper[5110]: I0130 00:27:19.326515 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v6j88_f47cb22d-f09e-43a7-95e0-0e1008827f08/kube-multus/0.log" Jan 30 00:27:19 crc kubenswrapper[5110]: I0130 00:27:19.348899 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:27:19 crc kubenswrapper[5110]: I0130 00:27:19.349087 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:27:27 crc kubenswrapper[5110]: E0130 00:27:27.879229 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:27:42 crc kubenswrapper[5110]: E0130 00:27:42.876208 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 
199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:27:55 crc kubenswrapper[5110]: I0130 00:27:55.875449 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:27:55 crc kubenswrapper[5110]: E0130 00:27:55.876530 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.140420 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495548-rcc6q"] Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.176518 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-rcc6q"] Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.176721 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.180395 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.180659 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.181513 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.265235 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdcn6\" (UniqueName: \"kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6\") pod \"auto-csr-approver-29495548-rcc6q\" (UID: \"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2\") " pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.367199 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sdcn6\" (UniqueName: \"kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6\") pod \"auto-csr-approver-29495548-rcc6q\" (UID: \"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2\") " pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.402974 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdcn6\" (UniqueName: \"kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6\") pod \"auto-csr-approver-29495548-rcc6q\" (UID: \"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2\") " pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:00 crc kubenswrapper[5110]: I0130 00:28:00.507684 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:01 crc kubenswrapper[5110]: I0130 00:28:01.069915 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-rcc6q"] Jan 30 00:28:01 crc kubenswrapper[5110]: I0130 00:28:01.828895 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" event={"ID":"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2","Type":"ContainerStarted","Data":"b9f778afa4e9a7fcb5fc2a15432f30924c6a703614b482c6011d4034fc592c35"} Jan 30 00:28:03 crc kubenswrapper[5110]: I0130 00:28:03.848966 5110 generic.go:358] "Generic (PLEG): container finished" podID="87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2" containerID="eb9c67adecf4eb1b15bc5c6f3670fa24a1ce9d44c2687e0b3f3f66ef9ab910bd" exitCode=0 Jan 30 00:28:03 crc kubenswrapper[5110]: I0130 00:28:03.849130 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" event={"ID":"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2","Type":"ContainerDied","Data":"eb9c67adecf4eb1b15bc5c6f3670fa24a1ce9d44c2687e0b3f3f66ef9ab910bd"} Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.195647 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.255973 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdcn6\" (UniqueName: \"kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6\") pod \"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2\" (UID: \"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2\") " Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.268623 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6" (OuterVolumeSpecName: "kube-api-access-sdcn6") pod "87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2" (UID: "87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2"). InnerVolumeSpecName "kube-api-access-sdcn6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.357880 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sdcn6\" (UniqueName: \"kubernetes.io/projected/87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2-kube-api-access-sdcn6\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.872875 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.872925 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-rcc6q" event={"ID":"87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2","Type":"ContainerDied","Data":"b9f778afa4e9a7fcb5fc2a15432f30924c6a703614b482c6011d4034fc592c35"} Jan 30 00:28:05 crc kubenswrapper[5110]: I0130 00:28:05.872992 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f778afa4e9a7fcb5fc2a15432f30924c6a703614b482c6011d4034fc592c35" Jan 30 00:28:06 crc kubenswrapper[5110]: I0130 00:28:06.277436 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-fbwzv"] Jan 30 00:28:06 crc kubenswrapper[5110]: I0130 00:28:06.285657 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-fbwzv"] Jan 30 00:28:06 crc kubenswrapper[5110]: I0130 00:28:06.881786 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a638d71c-d5c9-4d33-95c9-a7d38717c4e9" path="/var/lib/kubelet/pods/a638d71c-d5c9-4d33-95c9-a7d38717c4e9/volumes" Jan 30 00:28:08 crc kubenswrapper[5110]: I0130 00:28:08.359950 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-tpzqq_327eaa18-356c-4a5b-a6e2-a6cea319d8cb/control-plane-machine-set-operator/0.log" Jan 30 00:28:08 crc kubenswrapper[5110]: I0130 00:28:08.462730 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-7vndv_ada32307-a77a-45ee-8310-40d64876b14c/kube-rbac-proxy/0.log" Jan 30 00:28:08 crc kubenswrapper[5110]: I0130 00:28:08.578365 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-7vndv_ada32307-a77a-45ee-8310-40d64876b14c/machine-api-operator/0.log" Jan 30 00:28:08 crc kubenswrapper[5110]: E0130 00:28:08.878542 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:28:19 crc kubenswrapper[5110]: I0130 00:28:19.767572 5110 scope.go:117] "RemoveContainer" containerID="b365cfe6de4833bcfa813a60d74349c92d0c8b57c7b0ced0b779276c97f2ae30" Jan 30 00:28:21 crc kubenswrapper[5110]: E0130 00:28:21.876993 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:28:23 crc kubenswrapper[5110]: I0130 00:28:23.903842 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-h6lwn_5ea0e2d8-954b-46b8-b3e6-97e539a11a36/cert-manager-controller/0.log" Jan 30 00:28:24 crc kubenswrapper[5110]: I0130 00:28:24.026926 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-d45t8_36a64d1a-1715-4cf4-9f9a-f0641dee2884/cert-manager-cainjector/0.log" Jan 30 00:28:24 crc kubenswrapper[5110]: I0130 00:28:24.104993 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-tkvnv_39ea2e66-5b3d-43cc-b7d9-6f35a914b6dd/cert-manager-webhook/0.log" Jan 30 00:28:33 crc kubenswrapper[5110]: E0130 00:28:33.875411 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:28:39 crc kubenswrapper[5110]: I0130 00:28:39.211023 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:28:39 crc kubenswrapper[5110]: I0130 00:28:39.211905 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:28:40 crc kubenswrapper[5110]: I0130 00:28:40.393877 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-j6qvs_c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995/prometheus-operator/0.log" Jan 30 00:28:40 crc kubenswrapper[5110]: I0130 00:28:40.496507 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6_40f000fa-a4e9-4f45-a846-707d5b5b1643/prometheus-operator-admission-webhook/0.log" Jan 30 00:28:40 crc kubenswrapper[5110]: I0130 00:28:40.576779 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx_ee77ccd5-299e-47f9-ba9b-26e406040a34/prometheus-operator-admission-webhook/0.log" Jan 30 00:28:40 crc kubenswrapper[5110]: I0130 00:28:40.687757 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-4zx86_897cae54-4f71-44e7-a9ee-1ef4558e0432/operator/0.log" Jan 30 00:28:40 crc kubenswrapper[5110]: I0130 00:28:40.738629 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-88w5t_8653dcb1-b9d6-4b22-b7ba-0c91d408836a/perses-operator/0.log" Jan 30 00:28:48 crc kubenswrapper[5110]: E0130 00:28:48.883734 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.235781 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/util/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.334680 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/util/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.383953 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/pull/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.394762 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/pull/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.526547 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/pull/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.527730 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/util/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.550364 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxgtns_fcf6d0ef-21b9-4c57-a8b5-67230aa296d2/extract/0.log" Jan 30 00:28:56 crc kubenswrapper[5110]: I0130 00:28:56.709035 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_320c163c-8d94-4ca5-a17d-b0f3dcc0aa73/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.136251 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_320c163c-8d94-4ca5-a17d-b0f3dcc0aa73/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.142409 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_320c163c-8d94-4ca5-a17d-b0f3dcc0aa73/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.309894 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.492678 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.532092 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/pull/0.log" Jan 30 
00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.537538 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/pull/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.667062 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/util/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.683521 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/extract/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.704784 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xqt9v_4b38b889-3a80-4a92-ac57-00460c3dc1e6/pull/0.log" Jan 30 00:28:57 crc kubenswrapper[5110]: I0130 00:28:57.828799 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/util/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.008577 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/util/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.051454 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/pull/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.051753 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/pull/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.162815 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/util/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.225172 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/pull/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.245012 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nxlmt_c7df63d4-15bd-4b81-b3bf-cf9fe51d275a/extract/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.350083 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-utilities/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.542090 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-utilities/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.591691 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-content/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.596902 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-content/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.748108 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-utilities/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.817954 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/extract-content/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.834556 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzp6n_5fbf6653-173e-4277-8c52-24d58ad8733a/registry-server/0.log" Jan 30 00:28:58 crc kubenswrapper[5110]: I0130 00:28:58.861694 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.006232 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.009005 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.013878 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.198147 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.216313 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.253243 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-9bmtj_7967f41f-db4e-44fc-bdbc-2b67566a8fd9/marketplace-operator/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.368223 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxhq6_6b6ddc39-c7d9-4cc9-b843-c338baeb95f7/registry-server/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.421282 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.567182 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.567670 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.601777 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.740776 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-utilities/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.740981 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/extract-content/0.log" Jan 30 00:28:59 crc kubenswrapper[5110]: I0130 00:28:59.926168 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8znbs_b24ec3cb-77b2-49fd-ae11-4c99a2020581/registry-server/0.log" Jan 30 00:29:01 crc kubenswrapper[5110]: E0130 00:29:01.875287 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:29:09 crc kubenswrapper[5110]: I0130 00:29:09.210265 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:09 crc kubenswrapper[5110]: I0130 00:29:09.211379 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:14 crc kubenswrapper[5110]: I0130 00:29:14.319861 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-j6qvs_c2e6a8d9-325e-4b8d-b9c7-2e9f8f084995/prometheus-operator/0.log" Jan 30 00:29:14 crc kubenswrapper[5110]: I0130 00:29:14.325188 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d88b8c99-8lgq6_40f000fa-a4e9-4f45-a846-707d5b5b1643/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:14 crc kubenswrapper[5110]: I0130 00:29:14.366244 5110 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d88b8c99-ntkvx_ee77ccd5-299e-47f9-ba9b-26e406040a34/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:14 crc kubenswrapper[5110]: I0130 00:29:14.471969 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-4zx86_897cae54-4f71-44e7-a9ee-1ef4558e0432/operator/0.log" Jan 30 00:29:14 crc kubenswrapper[5110]: I0130 00:29:14.506134 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-88w5t_8653dcb1-b9d6-4b22-b7ba-0c91d408836a/perses-operator/0.log" Jan 30 00:29:14 crc kubenswrapper[5110]: E0130 00:29:14.875509 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:29:25 crc kubenswrapper[5110]: E0130 00:29:25.878610 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.210695 5110 patch_prober.go:28] interesting pod/machine-config-daemon-t6dv6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.212024 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.212143 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.213757 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5"} pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.213920 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" podUID="97dc714a-5d84-4c81-99ef-13067437fcad" containerName="machine-config-daemon" containerID="cri-o://7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5" gracePeriod=600 Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.664802 5110 generic.go:358] "Generic (PLEG): container finished" podID="97dc714a-5d84-4c81-99ef-13067437fcad" containerID="7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5" exitCode=0 Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.664921 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerDied","Data":"7f93629bc351ccac22c13766129bfc4f8d907ab3ad7962fcdf0fb53756bef6d5"} Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.665526 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t6dv6" event={"ID":"97dc714a-5d84-4c81-99ef-13067437fcad","Type":"ContainerStarted","Data":"4efdab988898504d08e631f1a899efca0932c0a518bad713d2d2be804f84b45c"} Jan 30 00:29:39 crc kubenswrapper[5110]: I0130 00:29:39.665554 5110 scope.go:117] "RemoveContainer" containerID="16d027baeb809d2b6203f7c501dfcd3bb2ea4f617acccd63cc39934d62c3ad3e" Jan 30 00:29:40 crc kubenswrapper[5110]: E0130 00:29:40.116058 5110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:29:40 crc kubenswrapper[5110]: E0130 00:29:40.116876 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb 
/bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb4fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr_openshift-marketplace(320c163c-8d94-4ca5-a17d-b0f3dcc0aa73): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:29:40 crc kubenswrapper[5110]: E0130 00:29:40.118197 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:29:51 crc kubenswrapper[5110]: E0130 00:29:51.881484 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73" Jan 30 00:29:55 crc kubenswrapper[5110]: I0130 00:29:55.804764 5110 generic.go:358] "Generic (PLEG): container finished" podID="97c1c191-5d38-49eb-9205-8631d974a301" containerID="7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b" exitCode=0 Jan 30 00:29:55 crc kubenswrapper[5110]: I0130 00:29:55.805691 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" event={"ID":"97c1c191-5d38-49eb-9205-8631d974a301","Type":"ContainerDied","Data":"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"} Jan 30 00:29:55 crc kubenswrapper[5110]: I0130 00:29:55.806574 5110 scope.go:117] "RemoveContainer" containerID="7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b" Jan 30 00:29:56 crc kubenswrapper[5110]: I0130 00:29:56.437452 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8r9xt_must-gather-5z7v4_97c1c191-5d38-49eb-9205-8631d974a301/gather/0.log" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.153431 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495550-fz8bw"] Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.154961 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2" containerName="oc" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.154985 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2" containerName="oc" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.155196 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="87be9d0c-2b0c-43e8-8fa1-eccd7beb80e2" containerName="oc" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.164201 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-fz8bw" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.170263 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-fz8bw"] Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.171981 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.172534 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.173937 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-6n555\"" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.263006 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtvg\" (UniqueName: \"kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg\") pod \"auto-csr-approver-29495550-fz8bw\" (UID: \"e637e11e-b3b4-40ee-89d3-65d3799ae995\") " pod="openshift-infra/auto-csr-approver-29495550-fz8bw" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.263975 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm"] Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.285799 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm"] Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.286074 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.289909 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.289980 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.365648 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.365859 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.366000 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtvg\" (UniqueName: \"kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg\") pod \"auto-csr-approver-29495550-fz8bw\" (UID: \"e637e11e-b3b4-40ee-89d3-65d3799ae995\") " 
pod="openshift-infra/auto-csr-approver-29495550-fz8bw" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.366136 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7nj\" (UniqueName: \"kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.406242 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtvg\" (UniqueName: \"kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg\") pod \"auto-csr-approver-29495550-fz8bw\" (UID: \"e637e11e-b3b4-40ee-89d3-65d3799ae995\") " pod="openshift-infra/auto-csr-approver-29495550-fz8bw" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.468535 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.468734 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7nj\" (UniqueName: \"kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.468879 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.471113 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.476998 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.494643 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7nj\" (UniqueName: \"kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj\") pod \"collect-profiles-29495550-z8khm\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.494809 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-fz8bw" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.618007 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.751447 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-fz8bw"] Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.852295 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-fz8bw" event={"ID":"e637e11e-b3b4-40ee-89d3-65d3799ae995","Type":"ContainerStarted","Data":"9c5e2d63c05901aecc390fb16ff99fa5a1df9893baa23330a26c9e4ce6709aca"} Jan 30 00:30:00 crc kubenswrapper[5110]: I0130 00:30:00.854591 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm"] Jan 30 00:30:00 crc kubenswrapper[5110]: W0130 00:30:00.868025 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01aaa96e_165d_4eb7_9ece_b11d0d24e5a4.slice/crio-f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e WatchSource:0}: Error finding container f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e: Status 404 returned error can't find the container with id f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e Jan 30 00:30:01 crc kubenswrapper[5110]: I0130 00:30:01.863407 5110 generic.go:358] "Generic (PLEG): container finished" podID="01aaa96e-165d-4eb7-9ece-b11d0d24e5a4" containerID="a3fa6424e434619c6acb5cb77a7de583e8f0dca590e8c1afce3bc1c7daec2291" exitCode=0 Jan 30 00:30:01 crc kubenswrapper[5110]: I0130 00:30:01.864066 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" event={"ID":"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4","Type":"ContainerDied","Data":"a3fa6424e434619c6acb5cb77a7de583e8f0dca590e8c1afce3bc1c7daec2291"} Jan 30 00:30:01 crc kubenswrapper[5110]: I0130 00:30:01.864111 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" event={"ID":"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4","Type":"ContainerStarted","Data":"f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e"} Jan 30 00:30:02 crc kubenswrapper[5110]: I0130 00:30:02.772675 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8r9xt/must-gather-5z7v4"] Jan 30 00:30:02 crc kubenswrapper[5110]: I0130 00:30:02.773108 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" podUID="97c1c191-5d38-49eb-9205-8631d974a301" containerName="copy" containerID="cri-o://0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68" gracePeriod=2 Jan 30 00:30:02 crc kubenswrapper[5110]: I0130 00:30:02.780101 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8r9xt/must-gather-5z7v4"] Jan 30 00:30:02 crc kubenswrapper[5110]: I0130 00:30:02.804469 5110 status_manager.go:895] "Failed to get status for pod" podUID="97c1c191-5d38-49eb-9205-8631d974a301" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" err="pods \"must-gather-5z7v4\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace 
\"openshift-must-gather-8r9xt\": no relationship found between node 'crc' and this object" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.151197 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.211877 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume\") pod \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.212014 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume\") pod \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.212101 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7nj\" (UniqueName: \"kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj\") pod \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\" (UID: \"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4\") " Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.214085 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4" (UID: "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.217591 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8r9xt_must-gather-5z7v4_97c1c191-5d38-49eb-9205-8631d974a301/copy/0.log" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.218076 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.219313 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4" (UID: "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.220071 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj" (OuterVolumeSpecName: "kube-api-access-7s7nj") pod "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4" (UID: "01aaa96e-165d-4eb7-9ece-b11d0d24e5a4"). InnerVolumeSpecName "kube-api-access-7s7nj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.314120 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgsz8\" (UniqueName: \"kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8\") pod \"97c1c191-5d38-49eb-9205-8631d974a301\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.314314 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output\") pod \"97c1c191-5d38-49eb-9205-8631d974a301\" (UID: \"97c1c191-5d38-49eb-9205-8631d974a301\") " Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.314729 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.314764 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.314785 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7s7nj\" (UniqueName: \"kubernetes.io/projected/01aaa96e-165d-4eb7-9ece-b11d0d24e5a4-kube-api-access-7s7nj\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.320116 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8" (OuterVolumeSpecName: "kube-api-access-sgsz8") pod "97c1c191-5d38-49eb-9205-8631d974a301" (UID: "97c1c191-5d38-49eb-9205-8631d974a301"). InnerVolumeSpecName "kube-api-access-sgsz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.372715 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "97c1c191-5d38-49eb-9205-8631d974a301" (UID: "97c1c191-5d38-49eb-9205-8631d974a301"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.415959 5110 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97c1c191-5d38-49eb-9205-8631d974a301-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.416022 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sgsz8\" (UniqueName: \"kubernetes.io/projected/97c1c191-5d38-49eb-9205-8631d974a301-kube-api-access-sgsz8\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.882750 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8r9xt_must-gather-5z7v4_97c1c191-5d38-49eb-9205-8631d974a301/copy/0.log" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.884186 5110 generic.go:358] "Generic (PLEG): container finished" podID="97c1c191-5d38-49eb-9205-8631d974a301" containerID="0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68" exitCode=143 Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.884476 5110 scope.go:117] "RemoveContainer" containerID="0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.884746 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r9xt/must-gather-5z7v4" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.891715 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm" event={"ID":"01aaa96e-165d-4eb7-9ece-b11d0d24e5a4","Type":"ContainerDied","Data":"f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e"} Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.891773 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f226319e739552a47491d770e15aaf0f7774fe591a098d33f60fd23cf04a992e" Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.891871 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-z8khm"
Jan 30 00:30:03 crc kubenswrapper[5110]: I0130 00:30:03.919102 5110 scope.go:117] "RemoveContainer" containerID="7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.078605 5110 scope.go:117] "RemoveContainer" containerID="0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68"
Jan 30 00:30:04 crc kubenswrapper[5110]: E0130 00:30:04.079663 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68\": container with ID starting with 0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68 not found: ID does not exist" containerID="0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.079716 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68"} err="failed to get container status \"0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68\": rpc error: code = NotFound desc = could not find container \"0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68\": container with ID starting with 0ffb7b77594a16d68b01c32dfda158ee5ef4869155ba49e83eabdbf867074d68 not found: ID does not exist"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.079749 5110 scope.go:117] "RemoveContainer" containerID="7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"
Jan 30 00:30:04 crc kubenswrapper[5110]: E0130 00:30:04.080186 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b\": container with ID starting with 7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b not found: ID does not exist" containerID="7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.080221 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b"} err="failed to get container status \"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b\": rpc error: code = NotFound desc = could not find container \"7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b\": container with ID starting with 7e6c8ca13bddb3bd1c393dca7d9f0ea85ad182e76146f05af08b5fceda14215b not found: ID does not exist"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.882317 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c1c191-5d38-49eb-9205-8631d974a301" path="/var/lib/kubelet/pods/97c1c191-5d38-49eb-9205-8631d974a301/volumes"
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.903034 5110 generic.go:358] "Generic (PLEG): container finished" podID="e637e11e-b3b4-40ee-89d3-65d3799ae995" containerID="97dfb3d4db9ca16decd0bc92f198e3db5871eb2122e6e312a2247a68d6650fdc" exitCode=0
Jan 30 00:30:04 crc kubenswrapper[5110]: I0130 00:30:04.903128 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-fz8bw" event={"ID":"e637e11e-b3b4-40ee-89d3-65d3799ae995","Type":"ContainerDied","Data":"97dfb3d4db9ca16decd0bc92f198e3db5871eb2122e6e312a2247a68d6650fdc"}
Jan 30 00:30:05 crc kubenswrapper[5110]: E0130 00:30:05.875875 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.261885 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-fz8bw"
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.363486 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtvg\" (UniqueName: \"kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg\") pod \"e637e11e-b3b4-40ee-89d3-65d3799ae995\" (UID: \"e637e11e-b3b4-40ee-89d3-65d3799ae995\") "
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.374305 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg" (OuterVolumeSpecName: "kube-api-access-cxtvg") pod "e637e11e-b3b4-40ee-89d3-65d3799ae995" (UID: "e637e11e-b3b4-40ee-89d3-65d3799ae995"). InnerVolumeSpecName "kube-api-access-cxtvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.465776 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cxtvg\" (UniqueName: \"kubernetes.io/projected/e637e11e-b3b4-40ee-89d3-65d3799ae995-kube-api-access-cxtvg\") on node \"crc\" DevicePath \"\""
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.924982 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-fz8bw" event={"ID":"e637e11e-b3b4-40ee-89d3-65d3799ae995","Type":"ContainerDied","Data":"9c5e2d63c05901aecc390fb16ff99fa5a1df9893baa23330a26c9e4ce6709aca"}
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.925062 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c5e2d63c05901aecc390fb16ff99fa5a1df9893baa23330a26c9e4ce6709aca"
Jan 30 00:30:06 crc kubenswrapper[5110]: I0130 00:30:06.925183 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-fz8bw"
Jan 30 00:30:07 crc kubenswrapper[5110]: I0130 00:30:07.338846 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-hvwrh"]
Jan 30 00:30:07 crc kubenswrapper[5110]: I0130 00:30:07.347807 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-hvwrh"]
Jan 30 00:30:08 crc kubenswrapper[5110]: I0130 00:30:08.889010 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931d619a-309f-4ffe-bc9f-93097cdf6afe" path="/var/lib/kubelet/pods/931d619a-309f-4ffe-bc9f-93097cdf6afe/volumes"
Jan 30 00:30:16 crc kubenswrapper[5110]: E0130 00:30:16.881047 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:30:19 crc kubenswrapper[5110]: I0130 00:30:19.973517 5110 scope.go:117] "RemoveContainer" containerID="818ede0477bdf0de808b61b61fe20baecb4d1e551b6b252700eeeea6e1fec40b"
Jan 30 00:30:31 crc kubenswrapper[5110]: E0130 00:30:31.876021 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:30:44 crc kubenswrapper[5110]: E0130 00:30:44.875631 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:30:56 crc kubenswrapper[5110]: E0130 00:30:56.876820 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:31:08 crc kubenswrapper[5110]: E0130 00:31:08.878638 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"
Jan 30 00:31:22 crc kubenswrapper[5110]: E0130 00:31:22.876287 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev76cr" podUID="320c163c-8d94-4ca5-a17d-b0f3dcc0aa73"